CLIP in medical imaging: A comprehensive survey
Contrastive Language-Image Pre-training (CLIP), a straightforward yet effective pre-training
paradigm, successfully introduces semantic-rich text supervision to vision models and has …
Advancing medical imaging with language models: featuring a spotlight on ChatGPT
This review paper aims to serve as a comprehensive guide and instructional resource for
researchers seeking to effectively implement language models in medical imaging research …
Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving
Concept bottleneck models have been successfully used for explainable machine
learning by encoding information within the model with a set of human-defined concepts. In …
A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law
In the fast-evolving domain of artificial intelligence, large language models (LLMs) such as
GPT-3 and GPT-4 are revolutionizing the landscapes of finance, healthcare, and law …
MMGPL: Multimodal Medical Data Analysis with Graph Prompt Learning
Prompt learning has demonstrated impressive efficacy in the fine-tuning of multimodal large
models to a wide range of downstream tasks. Nonetheless, applying existing prompt …
CLIP-QDA: An explainable concept bottleneck model
In this paper, we introduce an explainable algorithm designed from a multi-modal foundation
model, that performs fast and explainable image classification. Drawing inspiration from …
Towards concept-based interpretability of skin lesion diagnosis using vision-language models
Concept-based models naturally lend themselves to the development of inherently
interpretable skin lesion diagnosis, as medical experts make decisions based on a set of …
Pre-trained Vision-Language Models Learn Discoverable Visual Concepts
Do vision-language models (VLMs) pre-trained to caption an image of a "durian" learn visual
concepts such as "brown" (color) and "spiky" (texture) at the same time? We aim to answer …
A Survey on Trustworthiness in Foundation Models for Medical Image Analysis
The rapid advancement of foundation models in medical imaging represents a significant
leap toward enhancing diagnostic accuracy and personalized treatment. However, the …