Towards a general-purpose foundation model for computational pathology

RJ Chen, T Ding, MY Lu, DFK Williamson, G Jaume… - Nature Medicine, 2024 - nature.com
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks,
requiring the objective characterization of histopathological entities from whole-slide images …

Emerging properties in self-supervised vision transformers

M Caron, H Touvron, I Misra, H Jégou… - Proceedings of the …, 2021 - openaccess.thecvf.com
In this paper, we question if self-supervised learning provides new properties to Vision
Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the …
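This entry corresponds to the DINO method of self-distillation without labels. As a rough illustration of the core objective, the sketch below assumes a student and an EMA teacher network and shows the centered, sharpened teacher distribution supervising the student via cross-entropy; temperatures, the momentum value, and the multi-crop schedule are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of DINO-style self-distillation (illustrative, not the official code).
import torch
import torch.nn.functional as F


def dino_loss(student_out, teacher_out, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between the sharpened, centered teacher distribution
    and the student distribution for one (student view, teacher view) pair."""
    t = F.softmax((teacher_out - center) / teacher_temp, dim=-1).detach()
    log_s = F.log_softmax(student_out / student_temp, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()


@torch.no_grad()
def ema_update(student, teacher, momentum=0.996):
    """The teacher's parameters track the student via an exponential moving average."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```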

R-Drop: Regularized dropout for neural networks

L Wu, J Li, Y Wang, Q Meng, T Qin… - Advances in …, 2021 - proceedings.neurips.cc
Dropout is a powerful and widely used technique to regularize the training of deep neural
networks. Though effective and performing well, the randomness introduced by dropout …
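R-Drop addresses this randomness by running the same batch through the network twice and penalizing disagreement between the two dropout-perturbed predictions. A minimal sketch of that objective follows; the weighting factor `alpha` and the generic `model` are assumptions for illustration.

```python
# Minimal sketch of the R-Drop objective: two stochastic forward passes of the
# same batch plus a symmetric KL term between their output distributions.
import torch
import torch.nn.functional as F


def r_drop_loss(model, x, y, alpha=1.0):
    # The two passes see different dropout masks (model must be in train mode).
    logits1 = model(x)
    logits2 = model(x)

    ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))

    p1 = F.log_softmax(logits1, dim=-1)
    p2 = F.log_softmax(logits2, dim=-1)
    # Symmetric (bidirectional) KL divergence between the two predictions.
    kl = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
                + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))

    return ce + alpha * kl
```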

Cross-image relational knowledge distillation for semantic segmentation

C Yang, H Zhou, Z An, X Jiang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Current Knowledge Distillation (KD) methods for semantic segmentation often
guide the student to mimic the teacher's structured information generated from individual …
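The cross-image idea is to distill the similarity structure computed across a whole mini-batch rather than per image. The sketch below is a generic illustration of that relational matching, not the exact pixel-to-pixel and pixel-to-region losses of the cited paper; the temperature and the way embeddings are gathered are assumptions.

```python
# Hedged sketch of cross-image relational distillation: match the pairwise
# similarity structure of student and teacher embeddings gathered across the batch.
import torch
import torch.nn.functional as F


def relation_matrix(feats, temperature=0.1):
    """feats: (N, C) pixel/region embeddings gathered across the mini-batch."""
    feats = F.normalize(feats, dim=-1)
    return F.softmax(feats @ feats.t() / temperature, dim=-1)


def cross_image_relation_kd(student_feats, teacher_feats):
    rs = relation_matrix(student_feats)
    rt = relation_matrix(teacher_feats).detach()
    # KL divergence between row-wise relation distributions.
    return F.kl_div(rs.log(), rt, reduction="batchmean")
```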

Co2L: Contrastive continual learning

H Cha, J Lee, J Shin - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Recent breakthroughs in self-supervised learning show that such algorithms learn visual
representations that can be transferred better to unseen tasks than cross-entropy based …

Distilling large vision-language model with out-of-distribution generalizability

X Li, Y Fang, M Liu, Z Ling, Z Tu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Large vision-language models have achieved outstanding performance, but their size and
computational requirements make their deployment on resource-constrained devices and …

Masked video distillation: Rethinking masked feature modeling for self-supervised video representation learning

R Wang, D Chen, Z Wu, Y Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Benefiting from masked visual modeling, self-supervised video representation learning has
achieved remarkable progress. However, existing methods focus on learning …
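In masked feature modeling of this kind, a large fraction of tokens is hidden and the student is trained to regress a teacher's features at the hidden positions. The sketch below shows only that reconstruction loss under assumed token shapes; the video-specific encoders, decoder, and masking schedule of the cited paper are omitted.

```python
# Hedged sketch of masked feature distillation: regress the teacher's token
# features at the masked positions.
import torch
import torch.nn.functional as F


def masked_feature_distillation(student_pred, teacher_feats, mask):
    """student_pred, teacher_feats: (B, N, C) token features;
    mask: (B, N) boolean tensor marking the hidden tokens."""
    target = teacher_feats[mask].detach()
    return F.mse_loss(student_pred[mask], target)


# Example mask with a 90% masking ratio (illustrative choice).
B, N, C = 2, 196, 768
mask = torch.rand(B, N) < 0.9
```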

Online prototype learning for online continual learning

Y Wei, J Ye, Z Huang, J Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Online continual learning (CL) studies the problem of learning continuously from a single-pass
data stream while adapting to new data and mitigating catastrophic forgetting …
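A common building block in this setting is a per-class prototype maintained as a running mean of streamed features, with prediction by nearest prototype. The sketch below illustrates only that general idea under assumed shapes; it is not the full method (or the equilibrium mechanism) of the cited paper.

```python
# Hedged sketch of class prototypes maintained online over a single-pass stream.
import torch
import torch.nn.functional as F


class OnlinePrototypes:
    def __init__(self, num_classes, feat_dim):
        self.protos = torch.zeros(num_classes, feat_dim)
        self.counts = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, feats, labels):
        for f, y in zip(feats, labels):
            self.counts[y] += 1
            # Incremental running-mean update of the class prototype.
            self.protos[y] += (f - self.protos[y]) / self.counts[y]

    @torch.no_grad()
    def predict(self, feats):
        sims = F.normalize(feats, dim=-1) @ F.normalize(self.protos, dim=-1).t()
        return sims.argmax(dim=-1)
```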

PyramidCLIP: Hierarchical feature alignment for vision-language model pretraining

Y Gao, J Liu, Z Xu, J Zhang, K Li… - Advances in neural …, 2022 - proceedings.neurips.cc
Large-scale vision-language pre-training has achieved promising results on downstream
tasks. Existing methods highly rely on the assumption that the image-text pairs crawled from …
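Pretraining of this kind builds on the symmetric image-text contrastive (CLIP-style) objective sketched below; the hierarchical, multi-level alignment that gives the cited paper its name is omitted, and the temperature value is an illustrative assumption.

```python
# Hedged sketch of the CLIP-style image-text contrastive objective.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (B, D) embeddings of paired images and captions."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Matched pairs lie on the diagonal; contrast in both directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```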

ReSSL: Relational self-supervised learning with weak augmentation

M Zheng, S You, F Wang, C Qian… - Advances in …, 2021 - proceedings.neurips.cc
Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved
great success in learning visual representations without data annotations. However, most of …
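ReSSL replaces instance-level contrast with relational consistency: the similarity distribution of a weakly augmented (teacher) view over a memory queue supervises the distribution of a strongly augmented (student) view. The sketch below assumes an existing queue of past embeddings and an EMA teacher maintained elsewhere; the temperatures follow the usual convention that the teacher's is sharper, and the exact values are assumptions.

```python
# Hedged sketch of ReSSL-style relational consistency over a memory queue.
import torch
import torch.nn.functional as F


def relational_consistency_loss(student_emb, teacher_emb, queue,
                                tau_s=0.1, tau_t=0.04):
    """student_emb, teacher_emb: (B, D) embeddings of strong/weak views;
    queue: (K, D) embeddings of past samples."""
    student_emb = F.normalize(student_emb, dim=-1)
    teacher_emb = F.normalize(teacher_emb, dim=-1)
    queue = F.normalize(queue, dim=-1)

    log_p_s = F.log_softmax(student_emb @ queue.t() / tau_s, dim=-1)
    p_t = F.softmax(teacher_emb @ queue.t() / tau_t, dim=-1).detach()

    # Cross-entropy between the teacher's and student's relation distributions.
    return -(p_t * log_p_s).sum(dim=-1).mean()
```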