Symbols and mental programs: a hypothesis about human singularity
Natural language is often seen as the single factor that explains the cognitive singularity of
the human species. Instead, we propose that humans possess multiple internal languages …
Inductive biases for deep learning of higher-level cognition
A fascinating hypothesis is that human and animal intelligence could be explained by a few
principles (rather than an encyclopaedic list of heuristics). If that hypothesis was correct, we …
Trustworthy graph neural networks: Aspects, methods and trends
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …
How does information bottleneck help deep learning?
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …
The relational bottleneck as an inductive bias for efficient abstraction
A central challenge for cognitive science is to explain how abstract concepts are acquired
from limited experience. This has often been framed in terms of a dichotomy between …
Learning invariant molecular representation in latent discrete space
Molecular representation learning lays the foundation for drug discovery. However, existing
methods suffer from poor out-of-distribution (OOD) generalization, particularly when data for …
Improving compositional generalization using iterated learning and simplicial embeddings
Compositional generalization, the ability of an agent to generalize to unseen combinations
of latent factors, is easy for humans but hard for deep neural networks. A line of research in …
Discrete key-value bottleneck
Deep neural networks perform well on classification tasks where data streams are iid and
labeled data is abundant. Challenges emerge with non-stationary training data streams …
Neural systematic binder
The key to high-level cognition is believed to be the ability to systematically manipulate and
compose knowledge pieces. While token-like structured knowledge representations are …
From machine learning to robotics: Challenges and opportunities for embodied intelligence
Machine learning has long since become a keystone technology, accelerating science and
applications in a broad range of domains. Consequently, the notion of applying learning …