Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks

M Goldblum, H Souri, R Ni, M Shu… - Advances in …, 2024 - proceedings.neurips.cc
Neural network-based computer vision systems are typically built on a backbone, a
pretrained or randomly initialized feature extractor. Several years ago, the default option was …
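The backbone-plus-head pattern the snippet refers to can be sketched roughly as follows (a hedged illustration, not code from the paper; it assumes torchvision is installed and a hypothetical 10-class downstream task):

```python
# Minimal sketch of the "backbone" pattern (illustrative only; assumes
# torchvision and a hypothetical 10-class downstream task).
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone: an ImageNet-trained ResNet-50 used as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()  # drop the classification head, keep 2048-d features

# Randomly initialized alternative: the same architecture without pretrained weights.
# backbone = models.resnet50(weights=None)

# Task-specific head trained on top of the backbone's features.
head = nn.Linear(2048, 10)

x = torch.randn(4, 3, 224, 224)      # dummy batch of images
with torch.no_grad():
    features = backbone(x)           # (4, 2048) backbone features
logits = head(features)              # (4, 10) downstream predictions
```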

PromptStyler: Prompt-driven style generation for source-free domain generalization

J Cho, G Nam, S Kim, H Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In a joint vision-language space, a text feature (e.g., from "a photo of a dog") could effectively
represent its relevant image features (e.g., from dog photos). Also, a recent study has …
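The joint-space behaviour described here can be checked with a rough sketch like the one below (an illustration only, assuming the open-source `clip` package from OpenAI and a local image `dog.jpg`; neither comes from the paper):

```python
# Sketch: cosine similarity between a prompt's text feature and an image
# feature in a CLIP-style joint space. Assumes the `clip` package and dog.jpg.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

text = clip.tokenize(["a photo of a dog"]).to(device)        # text feature source
image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    text_feat = model.encode_text(text)
    image_feat = model.encode_image(image)

# High cosine similarity means the prompt's text feature can stand in
# for the image features of dog photos.
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
print(f"cosine similarity: {(text_feat @ image_feat.T).item():.3f}")
```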

The emergence of essential sparsity in large pre-trained models: The weights that matter

A Jaiswal, S Liu, T Chen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large pre-trained transformers are show-stealers in modern-day deep learning,
and it becomes crucial to comprehend the parsimonious patterns that exist within them as …

Read-only prompt optimization for vision-language few-shot learning

D Lee, S Song, J Suh, J Choi… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language
models to downstream tasks. These methods aim to adapt the pre-trained models by …
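As a rough, generic picture of prompt tuning (not this paper's read-only prompt method), one can prepend learnable context vectors to frozen token embeddings and train only those vectors; the tiny encoder below is a placeholder, not a real vision-language model:

```python
# Generic prompt-tuning sketch (CoOp-style learnable context vectors).
# Everything here is schematic; the frozen "text encoder" is a stand-in.
import torch
import torch.nn as nn

embed_dim, n_ctx, n_classes = 512, 4, 10

# Frozen pieces standing in for the pretrained model.
token_embedding = nn.Embedding(1000, embed_dim)   # placeholder vocabulary embeddings
text_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True), num_layers=1)
for p in list(token_embedding.parameters()) + list(text_encoder.parameters()):
    p.requires_grad_(False)

# The only trainable parameters: learnable context ("prompt") vectors.
ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)

class_token_ids = torch.arange(n_classes)          # one placeholder token per class name
class_embeds = token_embedding(class_token_ids)    # (n_classes, embed_dim)

# Prepend the shared learnable context to each class-name embedding.
prompts = torch.cat([ctx.unsqueeze(0).expand(n_classes, -1, -1),
                     class_embeds.unsqueeze(1)], dim=1)   # (n_classes, n_ctx + 1, embed_dim)
text_features = text_encoder(prompts).mean(dim=1)         # (n_classes, embed_dim)

# Only `ctx` is updated when optimizing a downstream classification loss.
optimizer = torch.optim.SGD([ctx], lr=2e-3)
```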

Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations

JZ HaoChen, C Wei, A Kumar… - Advances in neural …, 2022 - proceedings.neurips.cc
Contrastive learning is a highly effective method for learning representations from unlabeled
data. Recent works show that contrastive representations can transfer across domains …
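For reference, a minimal InfoNCE-style contrastive loss of the kind such representations are trained with might look like the following sketch (a generic SimCLR-flavoured example, not the authors' code):

```python
# Minimal InfoNCE-style contrastive loss over two augmented views per example.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two augmented views of the same N examples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature                  # (N, N) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```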

Borrowing knowledge from pre-trained language model: A new data-efficient visual learning paradigm

W Ma, S Li, JM Zhang, CH Liu, J Kang… - Proceedings of the …, 2023 - openaccess.thecvf.com
The development of vision models for real-world applications is hindered by the scarcity of
annotated data, which has necessitated the adoption of data-efficient visual learning …

SIMPLE: Specialized model-sample matching for domain generalization

Z Li, K Ren, X Jiang, Y Shen, H Zhang… - … Conference on Learning …, 2023 - openreview.net
In domain generalization (DG), most existing methods aspire to fine-tune a specific
pretrained model through novel DG algorithms. In this paper, we propose an alternative …

ClusT3: Information invariant test-time training

GAV Hakim, D Osowiechi, M Noori… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep learning models have shown remarkable performance across a broad range of vision
tasks. However, they are often vulnerable to domain shifts at test time. Test-time …

GeoNet: Benchmarking unsupervised adaptation across geographies

T Kalluri, W Xu, M Chandraker - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In recent years, several efforts have been aimed at improving the robustness of vision
models to domains and environments unseen during training. An important practical …

Open-source AI-based SE tools: opportunities and challenges of collaborative software learning

Z Lin, W Ma, T Lin, Y Zheng, J Ge, J Wang… - ACM Transactions on …, 2024 - dl.acm.org
Large Language Models (LLMs) have become instrumental in advancing software
engineering (SE) tasks, showcasing their efficacy in code understanding and beyond. AI …