MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation

C Chen, J Miao, D Wu, A Zhong, Z Yan, S Kim… - Medical Image …, 2024 - Elsevier
The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous …

Foundation models for biomedical image segmentation: A survey

HH Lee, Y Gu, T Zhao, Y Xu, J Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in biomedical image analysis have been significantly driven by the
Segment Anything Model (SAM). This transformative technology, originally developed for …

Unleashing the potential of SAM for medical adaptation via hierarchical decoding

Z Cheng, Q Wei, H Zhu, Y Wang, L Qu… - Proceedings of the …, 2024 - openaccess.thecvf.com
The Segment Anything Model (SAM) has garnered significant attention for its
versatile segmentation abilities and intuitive prompt-based interface. However, its application …

Segment Anything Model with uncertainty rectification for auto-prompting medical image segmentation

Y Zhang, S Hu, C Jiang, Y Cheng, Y Qi - arXiv preprint arXiv:2311.10529, 2023 - arxiv.org
The introduction of the Segment Anything Model (SAM) has marked a significant
advancement in prompt-driven image segmentation. However, SAM's application to medical …

APSeg: Auto-Prompt Network for Cross-Domain Few-Shot Semantic Segmentation

W He, Y Zhang, W Zhuo, L Shen… - Proceedings of the …, 2024 - openaccess.thecvf.com
Few-shot semantic segmentation (FSS) endeavors to segment unseen classes with only a
few labeled samples. Current FSS methods are commonly built on the assumption that their …

Interpretability-aware vision transformer

Y Qiang, C Li, P Khanduri, D Zhu - arXiv preprint arXiv:2309.08035, 2023 - arxiv.org
Vision Transformers (ViTs) have become prominent models for solving various vision tasks.
However, the interpretability of ViTs has not kept pace with their promising performance …

GeoSAM: Fine-tuning SAM with sparse and dense visual prompting for automated segmentation of mobility infrastructure

RI Sultan, C Li, H Zhu, P Khanduri, M Brocanelli… - arXiv preprint arXiv …, 2023 - arxiv.org
The Segment Anything Model (SAM) has shown impressive performance when applied to
natural image segmentation. However, it struggles with geographical images like aerial and …

Semi-supervised segmentation for construction and demolition waste recognition in-the-wild: Adversarial dual-view networks

D Sirimewan, M Harandi, H Peiris… - Resources, Conservation …, 2024 - Elsevier
Precise and automated segmentation of construction and demolition waste (CDW) is crucial
for recognizing the composition of mixed waste streams and facilitating automatic waste …

Is SAM 2 Better than SAM in Medical Image Segmentation?

S Sengupta, S Chakrabarty, R Soni - arXiv preprint arXiv:2408.04212, 2024 - arxiv.org
Segment Anything Model (SAM) demonstrated impressive performance in zero-shot
promptable segmentation on natural images. The recently released Segment Anything …

Segment Anything for Videos: A Systematic Survey

C Zhang, Y Cui, W Lin, G Huang, Y Rong, L Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The recent wave of foundation models has witnessed tremendous success in computer
vision (CV) and beyond, with the Segment Anything Model (SAM) having sparked a passion …