A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …
Diffusion art or digital forgery? Investigating data replication in diffusion models
Cutting-edge diffusion models produce images with high quality and customizability,
enabling them to be used for commercial art and graphic design purposes. But do diffusion …
Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning
We present modality gap, an intriguing geometric phenomenon of the representation space
of multi-modal models. Specifically, we show that different data modalities (e.g., images and …
Pervasive label errors in test sets destabilize machine learning benchmarks
We identify label errors in the test sets of 10 of the most commonly-used computer vision,
natural language, and audio datasets, and subsequently study the potential for these label …
Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …
Learn from all: Erasing attention consistency for noisy label facial expression recognition
Noisy label Facial Expression Recognition (FER) is more challenging than
traditional noisy label classification tasks due to the inter-class similarity and the annotation …
Towards unbounded machine unlearning
Deep machine unlearning is the problem of 'removing' from a trained neural network a subset
of its training set. This problem is very timely and has many applications, including the key …
Robust multi-view clustering with incomplete information
The success of existing multi-view clustering methods heavily relies on the assumption of
view consistency and instance completeness, referred to as the complete information …
Embracing change: Continual learning in deep neural networks
Artificial intelligence research has seen enormous progress over the past few decades, but it
predominantly relies on fixed datasets and stationary environments. Continual learning is an …