Symbols and mental programs: a hypothesis about human singularity

S Dehaene, F Al Roumi, Y Lakretz, S Planton… - Trends in Cognitive …, 2022 - cell.com
Natural language is often seen as the single factor that explains the cognitive singularity of
the human species. Instead, we propose that humans possess multiple internal languages …

Inductive biases for deep learning of higher-level cognition

A Goyal, Y Bengio - Proceedings of the Royal Society A, 2022 - royalsocietypublishing.org
A fascinating hypothesis is that human and animal intelligence could be explained by a few
principles (rather than an encyclopaedic list of heuristics). If that hypothesis were correct, we …

Trustworthy graph neural networks: Aspects, methods and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - arXiv preprint arXiv …, 2022 - arxiv.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …

How does information bottleneck help deep learning?

K Kawaguchi, Z Deng, X Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …
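For orientation, the classical information-bottleneck objective that this line of work builds on can be written, for an input X, target Y, and learned representation Z, as

    \min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

Minimizing I(X; Z) discards input information the representation does not need, while the \beta-weighted I(Z; Y) term preserves what is relevant for predicting the target; \beta sets the compression/prediction trade-off.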

The relational bottleneck as an inductive bias for efficient abstraction

TW Webb, SM Frankland, A Altabaa, S Segert… - Trends in Cognitive …, 2024 - cell.com
A central challenge for cognitive science is to explain how abstract concepts are acquired
from limited experience. This has often been framed in terms of a dichotomy between …
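A minimal sketch of the general idea behind a relational bottleneck (not the specific architectures surveyed by Webb et al.; all module names and dimensions below are illustrative): downstream computation is restricted to relations between encoded objects, here pairwise inner products, rather than the object embeddings themselves.

    import torch
    import torch.nn as nn

    class RelationalBottleneck(nn.Module):
        """Downstream layers see only the matrix of pairwise relations
        (inner products) between object embeddings, never the embeddings."""
        def __init__(self, obj_dim, hidden_dim, out_dim, n_objects):
            super().__init__()
            self.encoder = nn.Linear(obj_dim, hidden_dim)          # per-object encoder
            self.head = nn.Sequential(                             # operates on relations only
                nn.Linear(n_objects * n_objects, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )

        def forward(self, objects):                  # objects: (batch, n_objects, obj_dim)
            z = self.encoder(objects)                # (batch, n_objects, hidden_dim)
            rel = torch.bmm(z, z.transpose(1, 2))    # pairwise inner products
            return self.head(rel.flatten(1))         # predict from relations alone

Because only the relation matrix reaches the output head, the abstraction the head learns cannot depend on idiosyncratic features of individual objects.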

Learning invariant molecular representation in latent discrete space

X Zhuang, Q Zhang, K Ding, Y Bian… - Advances in …, 2024 - proceedings.neurips.cc
Molecular representation learning lays the foundation for drug discovery. However, existing
methods suffer from poor out-of-distribution (OOD) generalization, particularly when data for …

Improving compositional generalization using iterated learning and simplicial embeddings

Y Ren, S Lavoie, M Galkin… - Advances in …, 2024 - proceedings.neurips.cc
Compositional generalization, the ability of an agent to generalize to unseen combinations
of latent factors, is easy for humans but hard for deep neural networks. A line of research in …
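The simplicial-embedding component named in the title can be sketched roughly as follows, under the assumption that it follows the usual recipe of splitting the encoder output into groups and applying a softmax per group so that each group lies on a probability simplex (a soft, near-discrete code); the function name and temperature parameter are illustrative.

    import torch

    def simplicial_embedding(features, n_groups, temperature=1.0):
        """Reshape features into n_groups chunks and softmax each chunk,
        so every chunk lies on a probability simplex."""
        batch, dim = features.shape
        assert dim % n_groups == 0, "feature dim must split evenly into groups"
        chunks = features.view(batch, n_groups, dim // n_groups)
        return torch.softmax(chunks / temperature, dim=-1).view(batch, dim)

Lower temperatures push each group toward a one-hot code, which is one way such a bottleneck can encourage more discrete, compositional representations.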

Discrete key-value bottleneck

F Träuble, A Goyal, N Rahaman… - International …, 2023 - proceedings.mlr.press
Deep neural networks perform well on classification tasks where data streams are iid and
labeled data is abundant. Challenges emerge with non-stationary training data streams …
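A hedged sketch of what a discrete key-value bottleneck can look like (a simplified illustration, not the exact model of Träuble et al.; in that line of work the keys are typically fixed after initialization while only the values are trained): the encoder output is snapped to its nearest key in a codebook, and the value paired with that key is what flows onward.

    import torch
    import torch.nn as nn

    class DiscreteKeyValueBottleneck(nn.Module):
        """Snap an encoded input to its nearest key; pass the paired value on."""
        def __init__(self, dim, n_pairs):
            super().__init__()
            self.keys = nn.Parameter(torch.randn(n_pairs, dim))    # often frozen after init
            self.values = nn.Parameter(torch.randn(n_pairs, dim))  # learnable value table

        def forward(self, z):                     # z: (batch, dim) encoder output
            dists = torch.cdist(z, self.keys)     # distance from z to every key
            idx = dists.argmin(dim=-1)            # index of the nearest key
            return self.values[idx]               # the value paired with that key

Because gradients only touch the values that were selected, such a bottleneck localizes updates, which is one reason it is studied for non-stationary and continual-learning settings.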

Neural systematic binder

G Singh, Y Kim, S Ahn - arXiv preprint arXiv:2211.01177, 2022 - arxiv.org
The key to high-level cognition is believed to be the ability to systematically manipulate and
compose knowledge pieces. While token-like structured knowledge representations are …

From machine learning to robotics: Challenges and opportunities for embodied intelligence

N Roy, I Posner, T Barfoot, P Beaudoin… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning has long since become a keystone technology, accelerating science and
applications in a broad range of domains. Consequently, the notion of applying learning …