Physics for neuromorphic computing

D Marković, A Mizrahi, D Querlioz, J Grollier - Nature Reviews Physics, 2020 - nature.com
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware
for information processing, capable of highly sophisticated tasks. Systems built with standard …

Designing neural networks through neuroevolution

KO Stanley, J Clune, J Lehman… - Nature Machine …, 2019 - nature.com
Much of recent machine learning has focused on deep learning, in which neural network
weights are trained through variants of stochastic gradient descent. An alternative approach …
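
A minimal sketch of the alternative the abstract alludes to: treating a network's weight vector as a genome and improving it by mutation and selection instead of gradient descent. The task, network size and hyperparameters below are illustrative only, and real neuroevolution methods (e.g. NEAT) also evolve topologies, not just weights:

```python
import numpy as np

# Toy task: XOR, solved by evolving the weights of a fixed 2-4-1 MLP.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    # Unpack the flat 17-dimensional genome into layer parameters.
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2).ravel() - b2))

def fitness(w):
    # Negative mean squared error: higher is better.
    return -np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 17))              # population of weight genomes
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)]      # clone parents
    children += rng.normal(scale=0.1, size=children.shape)  # mutate
    pop = np.vstack([parents, children])             # next generation

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X), 2))  # should approach [0, 1, 1, 0]
```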

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

S Ali, T Abuhmed, S El-Sappagh, K Muhammad… - Information fusion, 2023 - Elsevier
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated
applications, but the outcomes of many AI models are challenging to comprehend and trust …

Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …

Three types of incremental learning

GM Van de Ven, T Tuytelaars, AS Tolias - Nature Machine Intelligence, 2022 - nature.com
Incrementally learning new information from a non-stationary stream of data, referred to as
'continual learning', is a key feature of natural intelligence, but a challenging problem for …
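
For reference, the three scenarios this paper distinguishes are task-, domain- and class-incremental learning, which differ in whether task identity is available at test time and whether it must be inferred. A schematic sketch of that distinction; `model`, its heads and `task_id` are hypothetical placeholders, not the paper's API:

```python
# The three continual-learning scenarios differ only in what the model
# is told (and asked) at test time.

def predict_task_incremental(model, x, task_id):
    # Task-IL: task identity is provided; use that task's output head.
    return model.heads[task_id](model.features(x)).argmax()

def predict_domain_incremental(model, x):
    # Domain-IL: task identity is unknown but not needed; the label
    # space is shared across tasks (same classes, shifting input domain).
    return model.shared_head(model.features(x)).argmax()

def predict_class_incremental(model, x):
    # Class-IL: task identity is unknown and must effectively be
    # inferred; the model chooses among all classes seen so far.
    return model.head_over_all_classes(model.features(x)).argmax()
```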

Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need

DW Zhou, ZW Cai, HJ Ye, DC Zhan, Z Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting
old ones. Traditional CIL models are trained from scratch to continually acquire knowledge …
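
A minimal sketch of why frozen pre-trained features simplify CIL, in the spirit of the simple baselines this line of work examines: a nearest-class-mean classifier over a frozen extractor, where learning new classes only adds prototypes and never overwrites old parameters. The extractor and data here are toy stand-ins:

```python
import numpy as np

class PrototypeCIL:
    """Class-incremental learner over a frozen feature extractor.
    New classes add a prototype (the class-mean feature), so earlier
    classes are never overwritten. `extract` stands in for any frozen
    pre-trained encoder mapping inputs to feature vectors."""

    def __init__(self, extract):
        self.extract = extract      # frozen feature function x -> R^d
        self.prototypes = {}        # class id -> mean feature vector

    def learn_classes(self, xs, labels):
        feats = self.extract(xs)
        for c in np.unique(labels):
            self.prototypes[int(c)] = feats[labels == c].mean(axis=0)

    def predict(self, xs):
        feats = self.extract(xs)
        classes = sorted(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in classes])
        # Nearest class mean over *all* classes seen so far (class-IL).
        dists = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        return np.array(classes)[dists.argmin(axis=1)]

# Usage with an identity "extractor" on 2-D toy features:
model = PrototypeCIL(extract=lambda x: x)
model.learn_classes(np.array([[0., 0.], [1., 1.]]), np.array([0, 1]))  # task 1
model.learn_classes(np.array([[4., 4.], [5., 0.]]), np.array([2, 3]))  # task 2
print(model.predict(np.array([[0.9, 1.1], [4.8, 0.2]])))  # -> [1 3]
```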

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

FOSTER: Feature boosting and compression for class-incremental learning

FY Wang, DW Zhou, HJ Ye, DC Zhan - European conference on computer …, 2022 - Springer
The ability to learn new concepts continually is necessary in this ever-changing world.
However, deep neural networks suffer from catastrophic forgetting when learning new …

Foundational challenges in assuring alignment and safety of large language models

U Anwar, A Saparov, J Rando, D Paleka… - arXiv preprint arXiv …, 2024 - arxiv.org
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …

DyTox: Transformers for continual learning with dynamic token expansion

A Douillard, A Ramé, G Couairon… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep network architectures struggle to continually learn new tasks without forgetting the
previous tasks. A recent trend indicates that dynamic architectures based on an expansion …
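
A rough sketch of the expansion idea named in the title, under the assumption that each new task contributes one learned token concatenated to the input sequence while the shared encoder stays fixed in size. The dimensions, initialization and encoder below are illustrative, and the actual DyTox model also includes a task-attention decoder and per-task classifiers:

```python
import torch
import torch.nn as nn

class TaskTokenPool(nn.Module):
    """Dynamic token expansion: the backbone keeps a constant size and
    each new task adds only a single learned 'task token' that is
    prepended to the patch tokens before encoding."""

    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        self.task_tokens = nn.ParameterList()   # grows by one per task
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def add_task(self):
        # Only `dim` new parameters per task (zeros here for simplicity).
        self.task_tokens.append(nn.Parameter(torch.zeros(1, 1, self.dim)))

    def forward(self, patches: torch.Tensor, task_id: int) -> torch.Tensor:
        # patches: (batch, n_tokens, dim)
        tok = self.task_tokens[task_id].expand(patches.size(0), -1, -1)
        out = self.encoder(torch.cat([tok, patches], dim=1))
        return out[:, 0]    # the task token's output embedding

model = TaskTokenPool(dim=32)
model.add_task()                          # task 0 arrives
model.add_task()                          # task 1 arrives: +32 params only
emb = model(torch.randn(8, 16, 32), task_id=1)
print(emb.shape)  # torch.Size([8, 32])
```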