Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks
Neural network based computer vision systems are typically built on a backbone, a
pretrained or randomly initialized feature extractor. Several years ago, the default option was …
PromptStyler: Prompt-driven style generation for source-free domain generalization
In a joint vision-language space, a text feature (e.g., from "a photo of a dog") could effectively
represent its relevant image features (e.g., from dog photos). Also, a recent study has …
The emergence of essential sparsity in large pre-trained models: The weights that matter
Large pre-trained transformers are the show-stealers of modern-day deep learning,
and it becomes crucial to comprehend the parsimonious patterns that exist within them as …
Read-only prompt optimization for vision-language few-shot learning
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language
models to downstream tasks. These methods aim to adapt the pre-trained models by …
Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations
Contrastive learning is a highly effective method for learning representations from unlabeled
data. Recent works show that contrastive representations can transfer across domains …
Borrowing knowledge from pre-trained language model: A new data-efficient visual learning paradigm
The development of vision models for real-world applications is hindered by the challenge of
annotated data scarcity, which has necessitated the adoption of data-efficient visual learning …
SIMPLE: Specialized model-sample matching for domain generalization
In domain generalization (DG), most existing methods aspire to fine-tune a specific
pretrained model through novel DG algorithms. In this paper, we propose an alternative …
ClusT3: Information invariant test-time training
Deep Learning models have shown remarkable performance in a broad range of vision
tasks. However, they are often vulnerable to domain shifts at test time. Test-time …
GeoNet: Benchmarking unsupervised adaptation across geographies
T Kalluri, W Xu, M Chandraker - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In recent years, several efforts have been aimed at improving the robustness of vision
models to domains and environments unseen during training. An important practical …
Open-source AI-based SE tools: opportunities and challenges of collaborative software learning
Large Language Models (LLMs) have become instrumental in advancing software
engineering (SE) tasks, showcasing their efficacy in code understanding and beyond. AI …