CLIP in medical imaging: A comprehensive survey

Z Zhao, Y Liu, H Wu, Y Li, S Wang, L Teng… - arXiv preprint arXiv …, 2023 - arxiv.org
Contrastive Language-Image Pre-training (CLIP), a straightforward yet effective pre-training
paradigm, successfully introduces semantic-rich text supervision to vision models and has …
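As a rough illustration of the pre-training paradigm named in this entry (not code from the survey itself), a CLIP-style objective aligns paired image and text embeddings with a symmetric contrastive loss; the feature dimension, batch size, and temperature below are placeholder assumptions.

```python
# Minimal sketch of CLIP-style contrastive alignment (illustrative only;
# the feature dimension and temperature are assumptions, not from the survey).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalise so dot products become cosine similarities.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    # Similarity matrix: entry (i, j) compares image i with text j.
    logits = image_feats @ text_feats.t() / temperature
    # Matched pairs sit on the diagonal; all other pairs act as negatives.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random tensors standing in for encoder outputs.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```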

Advancing medical imaging with language models: featuring a spotlight on ChatGPT

M Hu, J Qian, S Pan, Y Li, RLJ Qiu… - Physics in Medicine & …, 2024 - iopscience.iop.org
This review paper aims to serve as a comprehensive guide and instructional resource for
researchers seeking to effectively implement language models in medical imaging research …

Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving

J Echterhoff, A Yan, K Han… - Proceedings of the …, 2024 - openaccess.thecvf.com
Concept bottleneck models have been successfully used for explainable machine
learning by encoding information within the model with a set of human-defined concepts. In …
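For context on the concept bottleneck idea this entry builds on (a generic sketch, not the paper's implementation; the layer sizes and concept count are hypothetical), such a model first predicts human-defined concept scores and then predicts the label from those concepts alone, so each decision can be read off the intermediate concept activations.

```python
# Generic concept bottleneck classifier sketch (dimensions are assumptions).
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, in_dim=512, n_concepts=12, n_classes=5):
        super().__init__()
        self.concept_head = nn.Linear(in_dim, n_concepts)   # features -> concepts
        self.label_head = nn.Linear(n_concepts, n_classes)  # concepts -> label

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_head(x))  # interpretable bottleneck
        return concepts, self.label_head(concepts)

# Toy usage with random features standing in for a backbone's output.
concepts, logits = ConceptBottleneck()(torch.randn(4, 512))
```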

A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law

ZZ Chen, J Ma, X Zhang, N Hao, A Yan… - arXiv preprint arXiv …, 2024 - arxiv.org
In the fast-evolving domain of artificial intelligence, large language models (LLMs) such as
GPT-3 and GPT-4 are revolutionizing the landscapes of finance, healthcare, and law …

Driving through the concept gridlock: Unraveling explainability bottlenecks

J Echterhoff, A Yan, K Han, A Abdelraouf… - arXiv preprint arXiv …, 2023 - arxiv.org
Concept bottleneck models have been successfully used for explainable machine learning
by encoding information within the model with a set of human-defined concepts. In the …

MMGPL: Multimodal Medical Data Analysis with Graph Prompt Learning

L Peng, S Cai, Z Wu, H Shang, X Zhu, X Li - Medical Image Analysis, 2024 - Elsevier
Prompt learning has demonstrated impressive efficacy in fine-tuning multimodal large
models for a wide range of downstream tasks. Nonetheless, applying existing prompt …

CLIP-QDA: An explainable concept bottleneck model

R Kazmierczak, E Berthier, G Frehse… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we introduce an explainable algorithm designed from a multi-modal foundation
model that performs fast and explainable image classification. Drawing inspiration from …

Towards concept-based interpretability of skin lesion diagnosis using vision-language models

C Patrício, LF Teixeira, JC Neves - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Concept-based models naturally lend themselves to the development of inherently
interpretable skin lesion diagnosis, as medical experts make decisions based on a set of …

Pre-trained Vision-Language Models Learn Discoverable Visual Concepts

Y Zang, T Yun, H Tan, T Bui, C Sun - arXiv preprint arXiv:2404.12652, 2024 - arxiv.org
Do vision-language models (VLMs) pre-trained to caption an image of a "durian" learn visual
concepts such as "brown" (color) and "spiky" (texture) at the same time? We aim to answer …

A Survey on Trustworthiness in Foundation Models for Medical Image Analysis

C Shi, R Rezai, J Yang, Q Dou, X Li - arXiv preprint arXiv:2407.15851, 2024 - arxiv.org
The rapid advancement of foundation models in medical imaging represents a significant
leap toward enhancing diagnostic accuracy and personalized treatment. However, the …