Reinforcement learning, fast and slow

M Botvinick, S Ritter, JX Wang, Z Kurth-Nelson… - Trends in cognitive …, 2019 - cell.com
Deep reinforcement learning (RL) methods have driven impressive advances in artificial
intelligence in recent years, exceeding human performance in domains ranging from Atari to …

Deep learning in electron microscopy

JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …

Advancing neuromorphic computing with loihi: A survey of results and outlook

M Davies, A Wild, G Orchard… - Proceedings of the …, 2021 - ieeexplore.ieee.org
Deep artificial neural networks apply principles of the brain's information processing that led
to breakthroughs in machine learning spanning many problem domains. Neuromorphic …

2022 roadmap on neuromorphic computing and engineering

DV Christensen, R Dittmann… - Neuromorphic …, 2022 - iopscience.iop.org
Modern computation based on von Neumann architecture is now a mature cutting-edge
science. In the von Neumann architecture, processing and memory units are implemented …

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
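
The setting the survey covers can be illustrated with one federated-averaging round: a server broadcasts a global model, clients run local training on their private data, and the server averages the returned models. The sketch below is a minimal illustration under assumed placeholders (a linear least-squares model, synthetic client data, unweighted averaging); it is not the survey's own algorithmic contribution, and FedAvg itself originates with McMahan et al.

import numpy as np

# One federated-averaging (FedAvg-style) round, sketched with placeholders:
# a linear least-squares model, synthetic per-client data, and unweighted
# averaging of client models. Only model parameters leave the clients.
rng = np.random.default_rng(0)
n_clients, dim, lr = 5, 10, 0.1

# Each client holds its own private (x, y) data.
client_data = [(rng.normal(size=(20, dim)), rng.normal(size=20))
               for _ in range(n_clients)]
global_w = np.zeros(dim)

def local_update(w, x, y, steps=5):
    """Run a few local SGD steps on one client's data (squared loss)."""
    w = w.copy()
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

for _ in range(10):
    # Server broadcasts the global model; clients train locally.
    client_models = [local_update(global_w, x, y) for x, y in client_data]
    # Server aggregates by averaging the returned weights.
    global_w = np.mean(client_models, axis=0)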

A solution to the learning dilemma for recurrent networks of spiking neurons

G Bellec, F Scherr, A Subramoney, E Hajek… - Nature …, 2020 - nature.com
Recurrently connected networks of spiking neurons underlie the astounding information
processing capabilities of the brain. Yet in spite of extensive research, how they can learn …
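
For context on the dynamics at issue, a recurrent spiking network can be written as a discrete-time leaky integrate-and-fire recurrence. The sketch below shows only that forward dynamics, with arbitrary sizes, random weights, and a Poisson-like input as assumptions; it does not reproduce the paper's learning rule (e-prop).

import numpy as np

# Forward dynamics of a recurrent leaky integrate-and-fire (LIF) network in
# discrete time. Sizes, weights, and inputs are arbitrary placeholders; the
# paper's learning rule is not implemented here.
rng = np.random.default_rng(0)
n_in, n_rec, T = 20, 50, 100
alpha, v_th = 0.9, 1.0                    # membrane decay per step, spike threshold

w_in = rng.normal(scale=0.5, size=(n_rec, n_in))
w_rec = rng.normal(scale=0.5 / np.sqrt(n_rec), size=(n_rec, n_rec))
np.fill_diagonal(w_rec, 0.0)              # no self-connections

v = np.zeros(n_rec)                       # membrane potentials
z = np.zeros(n_rec)                       # spikes from the previous step
spike_train = []

for t in range(T):
    x = (rng.random(n_in) < 0.05).astype(float)       # sparse random input spikes
    v = alpha * v + w_in @ x + w_rec @ z - v_th * z    # leak, input, recurrence, reset
    z = (v > v_th).astype(float)                       # threshold crossing emits a spike
    spike_train.append(z.copy())

print("mean firing rate:", np.mean(spike_train))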

You only propagate once: Accelerating adversarial training via maximal principle

D Zhang, T Zhang, Y Lu, Z Zhu… - Advances in neural …, 2019 - proceedings.neurips.cc
Deep learning achieves state-of-the-art results in many tasks in computer vision and natural
language processing. However, recent works have shown that deep networks can be …
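
The baseline this paper accelerates is standard adversarial training, where every weight update is preceded by an inner loop of projected gradient ascent (PGD) on the input. The sketch below shows that inner loop for a toy linear model with squared loss so the input gradient has a closed form; the model and sizes are placeholders, and the paper's maximal-principle reformulation (YOPO) is not shown.

import numpy as np

# Inner PGD loop of standard adversarial training: k ascent steps on the
# input, projected to an l_inf ball, before each weight update. A linear
# model with squared loss stands in for the network.
rng = np.random.default_rng(0)
dim, eps, step, k = 10, 0.3, 0.1, 7
w = rng.normal(size=dim)
x_clean, y = rng.normal(size=dim), 1.0

def loss_grad_x(w, x, y):
    """Gradient of 0.5*(w.x - y)^2 with respect to the input x."""
    return (w @ x - y) * w

# Inner maximization: find a perturbed input that increases the loss.
x_adv = x_clean.copy()
for _ in range(k):
    x_adv = x_adv + step * np.sign(loss_grad_x(w, x_adv, y))
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)

# Outer minimization then takes a weight step at the adversarial input;
# repeating the k-step inner loop for every update is the dominant cost.
grad_w = (w @ x_adv - y) * x_adv
w = w - 0.05 * grad_w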

Just pick a sign: Optimizing deep multitask models with gradient sign dropout

Z Chen, J Ngiam, Y Huang, T Luong… - Advances in …, 2020 - proceedings.neurips.cc
The vast majority of deep models use multiple gradient signals, typically corresponding to a
sum of multiple loss terms, to update a shared set of trainable weights. However, these …
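
The underlying tension can be made concrete: per-loss gradients on shared weights can disagree in sign, and simply summing them lets conflicting components cancel. The sketch below uses a crude sign-agreement mask to illustrate that conflict; it is a simplified stand-in for sign-based conflict handling, not the stochastic gradient-sign-dropout procedure proposed in the paper.

import numpy as np

# Two loss terms yield two gradient signals for the same shared weights.
# A plain sum lets sign-conflicting components cancel; masking components
# where the signs disagree illustrates the conflict this line of work targets.
rng = np.random.default_rng(0)
g_task_a = rng.normal(size=8)    # gradient of loss A w.r.t. shared weights
g_task_b = rng.normal(size=8)    # gradient of loss B w.r.t. shared weights

plain_sum = g_task_a + g_task_b

agree = np.sign(g_task_a) == np.sign(g_task_b)   # components where signals agree
masked_sum = np.where(agree, plain_sum, 0.0)     # drop conflicting components

print("conflicting components:", int((~agree).sum()), "of", len(agree))
print("plain sum: ", np.round(plain_sum, 2))
print("masked sum:", np.round(masked_sum, 2))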

Hypernetworks

D Ha, A Dai, QV Le - arXiv preprint arXiv:1609.09106, 2016 - arxiv.org
This work explores hypernetworks: an approach of using one network, also known as a
hypernetwork, to generate the weights for another network. Hypernetworks provide an …
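
The core idea is that a small generator network outputs values that are reshaped into the weight matrix of a target network. The sketch below shows one target layer's weights being generated from a layer embedding; the layer sizes, the random embedding, and the plain two-layer generator are assumptions for illustration, not the paper's architectures.

import numpy as np

# Minimal hypernetwork sketch: a small generator maps a layer embedding to
# the flattened weight matrix of a target layer. Sizes and the generator
# itself are illustrative placeholders.
rng = np.random.default_rng(0)
emb_dim, hidden = 4, 32
in_dim, out_dim = 16, 8                  # shape of the target layer's weights

# Hypernetwork parameters (these are what would actually be trained).
W1 = rng.normal(scale=0.1, size=(hidden, emb_dim))
W2 = rng.normal(scale=0.1, size=(in_dim * out_dim, hidden))

def generate_weights(z):
    """Map a layer embedding z to the target layer's weight matrix."""
    h = np.tanh(W1 @ z)
    return (W2 @ h).reshape(out_dim, in_dim)

z = rng.normal(size=emb_dim)             # embedding associated with this layer
W_target = generate_weights(z)           # generated weights of the target layer

x = rng.normal(size=in_dim)
y = np.tanh(W_target @ x)                # forward pass through the generated layer
print(W_target.shape, y.shape)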

Learning to perform local rewriting for combinatorial optimization

X Chen, Y Tian - Advances in neural information …, 2019 - proceedings.neurips.cc
Search-based methods for hard combinatorial optimization are often guided by heuristics.
Tuning heuristics in various conditions and situations is often time-consuming. In this paper …
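
As a point of reference for the local-rewriting framing, a classical hand-designed rewriting heuristic is 2-opt for tours: repeatedly reverse a segment whenever that shortens the tour. The sketch below implements that fixed rule on random 2-D points as an assumed toy instance; the paper's contribution, learning which region to rewrite and how, is not reproduced here.

import numpy as np

# Classical local rewriting for a tour: 2-opt reverses a segment whenever the
# rewrite shortens the tour. This fixed heuristic is the kind of rule the
# paper replaces with learned region-picking and rewriting policies.
rng = np.random.default_rng(0)
pts = rng.random((12, 2))                 # random 2-D cities (toy instance)

def tour_length(order):
    p = pts[order]
    return np.sum(np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1))

order = rng.permutation(len(pts))
best = tour_length(order)
improved = True
while improved:
    improved = False
    for i in range(1, len(order) - 1):
        for j in range(i + 2, len(order) + 1):
            # Rewrite: reverse the segment order[i:j].
            candidate = np.concatenate([order[:i], order[i:j][::-1], order[j:]])
            length = tour_length(candidate)
            if length < best:
                order, best, improved = candidate, length, True

print("tour length after 2-opt:", round(best, 3))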