A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

Diffusion art or digital forgery? Investigating data replication in diffusion models

G Somepalli, V Singla, M Goldblum… - Proceedings of the …, 2023 - openaccess.thecvf.com
Cutting-edge diffusion models produce images with high quality and customizability,
enabling them to be used for commercial art and graphic design purposes. But do diffusion …

Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning

VW Liang, Y Zhang, Y Kwon… - Advances in Neural …, 2022 - proceedings.neurips.cc
We present modality gap, an intriguing geometric phenomenon of the representation space
of multi-modal models. Specifically, we show that different data modalities (e.g. images and …

Pervasive label errors in test sets destabilize machine learning benchmarks

CG Northcutt, A Athalye, J Mueller - arXiv preprint arXiv:2103.14749, 2021 - arxiv.org
We identify label errors in the test sets of 10 of the most commonly-used computer vision,
natural language, and audio datasets, and subsequently study the potential for these label …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, R Guo, H Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

Learn from all: Erasing attention consistency for noisy label facial expression recognition

Y Zhang, C Wang, X Ling, W Deng - European Conference on Computer …, 2022 - Springer
Noisy label Facial Expression Recognition (FER) is more challenging than
traditional noisy label classification tasks due to the inter-class similarity and the annotation …

Towards unbounded machine unlearning

M Kurmanji, P Triantafillou, J Hayes… - Advances in neural …, 2024 - proceedings.neurips.cc
Deep machine unlearning is the problem of 'removing' from a trained neural network a subset
of its training set. This problem is very timely and has many applications, including the key …

Robust multi-view clustering with incomplete information

M Yang, Y Li, P Hu, J Bai, J Lv… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The success of existing multi-view clustering methods heavily relies on the assumption of
view consistency and instance completeness, referred to as the complete information …

Embracing change: Continual learning in deep neural networks

R Hadsell, D Rao, AA Rusu, R Pascanu - Trends in cognitive sciences, 2020 - cell.com
Artificial intelligence research has seen enormous progress over the past few decades, but it
predominantly relies on fixed datasets and stationary environments. Continual learning is an …