URL: A representation learning benchmark for transferable uncertainty estimates

M Kirchhof, B Mucsányi, SJ Oh… - Advances in Neural …, 2023 - proceedings.neurips.cc
Representation learning has significantly driven the field to develop pretrained
models that can act as a valuable starting point when transferring to new datasets. With the …

Uncertainty quantification metrics for deep regression

SK Lind, Z Xiong, PE Forssén, V Krüger - Pattern Recognition Letters, 2024 - Elsevier
When deploying deep neural networks on robots or other physical systems, the learned
model should reliably quantify predictive uncertainty. A reliable uncertainty allows …
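The snippet does not say which metrics the paper evaluates, so as a minimal sketch of one standard uncertainty metric for deep regression, the Python below computes the average Gaussian negative log-likelihood for a model that predicts a per-sample mean and variance (the function name and synthetic data are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_nll(y_true, mu, var, eps=1e-6):
    """Average Gaussian negative log-likelihood (lower is better).

    Scores a regressor that predicts a mean `mu` and variance `var`
    per sample on how well that predictive distribution covers the
    observed targets `y_true`.
    """
    var = np.maximum(var, eps)  # guard against degenerate zero variance
    return np.mean(0.5 * (np.log(2 * np.pi * var) + (y_true - mu) ** 2 / var))

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
mu = y + rng.normal(scale=0.1, size=100)  # predictions close to the targets
var = np.full(100, 0.1 ** 2)              # variance matching the actual error scale
print(gaussian_nll(y, mu, var))
```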

Taming CLIP for Fine-Grained and Structured Visual Understanding of Museum Exhibits

AA Balauca, DP Paudel, K Toutanova… - European Conference on …, 2025 - Springer
CLIP is a powerful and widely used tool for understanding images in the context of natural
language descriptions to perform nuanced tasks. However, it does not offer application …

Just say the name: Online continual learning with category names only via data generation

M Seo, S Cho, M Lee, D Misra, H Choi, SJ Kim… - arXiv preprint arXiv …, 2024 - arxiv.org
Requiring extensive human supervision is often impractical for continual learning due to its
cost, leading to the emergence of 'name-only continual learning' that only provides the name …

Not all samples should be utilized equally: Towards understanding and improving dataset distillation

S Wang, Y Yang, Q Wang, K Li, L Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Dataset Distillation (DD) aims to synthesize a small dataset capable of performing
comparably to the original dataset. Despite the success of numerous DD methods …
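For readers new to the area, dataset distillation is usually framed as a bilevel optimization problem; the formulation below is the generic form of this objective, not necessarily the exact variant studied in this paper:

```latex
% Generic bilevel objective for dataset distillation: find a small
% synthetic set S whose trained model performs well on the real data D.
\[
  \mathcal{S}^{*} = \arg\min_{\mathcal{S}}
    \mathcal{L}\bigl(\theta^{*}(\mathcal{S}); \mathcal{D}\bigr)
  \quad \text{s.t.} \quad
  \theta^{*}(\mathcal{S}) = \arg\min_{\theta}
    \mathcal{L}\bigl(\theta; \mathcal{S}\bigr),
  \qquad |\mathcal{S}| \ll |\mathcal{D}|.
\]
```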

Pretrained Visual Uncertainties

M Kirchhof, M Collier, SJ Oh, E Kasneci - arXiv preprint arXiv:2402.16569, 2024 - arxiv.org
Accurate uncertainty estimation is vital to trustworthy machine learning, yet uncertainties
typically have to be learned for each task anew. This work introduces the first pretrained …

Measuring Pointwise V-Usable Information In-Context-ly

S Lu, S Chen, Y Li, D Bitterman, G Savova… - arXiv preprint arXiv …, 2023 - arxiv.org
In-context learning (ICL) is a new learning paradigm that has gained popularity along with
the development of large language models. In this work, we adapt a recently proposed …
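The "recently proposed" measure here is pointwise V-usable information (PVI); assuming the standard definition from Ethayarajh et al. (2022), it compares a model g' that sees the input x against a model g fit on the label alone (this paper's contribution, per the snippet, is estimating it in-context rather than by fine-tuning):

```latex
% Pointwise V-usable information of an input x about its label y,
% for a model family V: higher PVI means x makes y easier to predict.
\[
  \mathrm{PVI}(x \to y) =
    -\log_{2} g[\varnothing](y) + \log_{2} g'[x](y)
\]
```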

Understanding the World's Museums through Vision-Language Reasoning

AA Balauca, S Garai, S Balauca, RU Shetty… - arXiv preprint arXiv …, 2024 - arxiv.org
Museums serve as vital repositories of cultural heritage and historical artifacts spanning
diverse epochs, civilizations, and regions, preserving well-documented collections. Data …

Slight Corruption in Pre-training Data Makes Better Diffusion Models

H Chen, Y Han, D Misra, X Li, K Hu, D Zou… - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models (DMs) have shown remarkable capabilities in generating realistic, high-quality
images, audio, and video. They benefit significantly from extensive pre-training on …

Identifying Task Groupings for Multi-Task Learning Using Pointwise V-Usable Information

Y Li, T Miller, S Bethard, G Savova - arXiv preprint arXiv:2410.12774, 2024 - arxiv.org
The success of multi-task learning can depend heavily on which tasks are grouped together.
Naively grouping all tasks or a random set of tasks can result in negative transfer, with the …