A comprehensive survey of few-shot learning: Evolution, applications, challenges, and opportunities
Few-shot learning (FSL) has emerged as an effective learning method and shows great
potential. Despite the recent creative works in tackling FSL tasks, learning valid information …
Are we learning yet? A meta review of evaluation failures across machine learning
Many subfields of machine learning share a common stumbling block: evaluation. Advances
in machine learning often evaporate under closer scrutiny or turn out to be less widely …
Flamingo: a visual language model for few-shot learning
Building models that can be rapidly adapted to novel tasks using only a handful of annotated
examples is an open challenge for multimodal machine learning research. We introduce …
Towards a general-purpose foundation model for computational pathology
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks,
requiring the objective characterization of histopathological entities from whole-slide images …
StableRep: Synthetic images from text-to-image models make strong visual representation learners
We investigate the potential of learning visual representations using synthetic images
generated by text-to-image models. This is a natural question in the light of the excellent …
Language in a bottle: Language model guided concept bottlenecks for interpretable image classification
Concept Bottleneck Models (CBM) are inherently interpretable models that factor
model decisions into human-readable concepts. They allow people to easily understand …
Tip-Adapter: Training-free adaption of CLIP for few-shot classification
Contrastive Vision-Language Pre-training, known as CLIP, has provided a new
paradigm for learning visual representations using large-scale image-text pairs. It shows …
Prompt distribution learning
We present prompt distribution learning for effectively adapting a pre-trained vision-
language model to address downstream recognition tasks. Our method not only learns low …
Learning to prompt for vision-language models
Large pre-trained vision-language models like CLIP have shown great potential in learning
representations that are transferable across a wide range of downstream tasks. Different …
CrossPoint: Self-supervised cross-modal contrastive learning for 3D point cloud understanding
M Afham, I Dissanayake… - Proceedings of the …, 2022 - openaccess.thecvf.com
Manual annotation of large-scale point cloud dataset for varying tasks such as 3D object
classification, segmentation and detection is often laborious owing to the irregular structure …