URL: A representation learning benchmark for transferable uncertainty estimates
Representation learning has significantly driven the field to develop pretrained
models that can act as a valuable starting point when transferring to new datasets. With the …
Uncertainty quantification metrics for deep regression
When deploying deep neural networks on robots or other physical systems, the learned
model should reliably quantify predictive uncertainty. A reliable uncertainty allows …
Taming CLIP for Fine-Grained and Structured Visual Understanding of Museum Exhibits
CLIP is a powerful and widely used tool for understanding images in the context of natural
language descriptions to perform nuanced tasks. However, it does not offer application …
Just say the name: Online continual learning with category names only via data generation
Requiring extensive human supervision is often impractical for continual learning due to its
cost, leading to the emergence of 'name-only continual learning' that only provides the name …
Not all samples should be utilized equally: Towards understanding and improving dataset distillation
Dataset Distillation (DD) aims to synthesize a small dataset capable of performing
comparably to the original dataset. Despite the success of numerous DD methods …
Pretrained Visual Uncertainties
Accurate uncertainty estimation is vital to trustworthy machine learning, yet uncertainties
typically have to be learned for each task anew. This work introduces the first pretrained …
Measuring Pointwise V-Usable Information In-Context-ly
In-context learning (ICL) is a new learning paradigm that has gained popularity along with
the development of large language models. In this work, we adapt a recently proposed …
Understanding the World's Museums through Vision-Language Reasoning
AA Balauca, S Garai, S Balauca, RU Shetty… - arXiv preprint arXiv …, 2024 - arxiv.org
Museums serve as vital repositories of cultural heritage and historical artifacts spanning
diverse epochs, civilizations, and regions, preserving well-documented collections. Data …
Slight Corruption in Pre-training Data Makes Better Diffusion Models
Diffusion models (DMs) have shown remarkable capabilities in generating realistic high-
quality images, audios, and videos. They benefit significantly from extensive pre-training on …
Identifying Task Groupings for Multi-Task Learning Using Pointwise V-Usable Information
The success of multi-task learning can depend heavily on which tasks are grouped together.
Naively grouping all tasks or a random set of tasks can result in negative transfer, with the …