Learning with limited annotations: a survey on deep semi-supervised learning for medical image segmentation

R Jiao, Y Zhang, L Ding, B Xue, J Zhang, R Cai, et al. - Computers in Biology and Medicine, 2024 - Elsevier
Medical image segmentation is a fundamental and critical step in many image-guided
clinical approaches. The recent success of deep learning-based segmentation methods usually …
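
A minimal sketch of the mean-teacher consistency baseline that recurs throughout this literature may help ground the survey's scope. It is a generic illustration rather than a method from the survey itself; the EMA decay, consistency weight, and model interface are assumed for the example.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """The teacher is a frozen copy of the student, updated only by EMA."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Teacher weights track an exponential moving average of the student's."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def mean_teacher_loss(student, teacher, x_lab, y_lab, x_unlab, lam=0.1):
    """Supervised CE on labeled scans plus a consistency term pulling the
    student's predictions on unlabeled scans toward the EMA teacher's."""
    sup = F.cross_entropy(student(x_lab), y_lab)
    with torch.no_grad():
        t_prob = torch.softmax(teacher(x_unlab), dim=1)
    s_prob = torch.softmax(student(x_unlab), dim=1)
    return sup + lam * F.mse_loss(s_prob, t_prob)
```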

Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks

L Wang, KJ Yoon - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 - ieeexplore.ieee.org
Deep neural models have been successful in almost every field in recent years, even
solving the most complex problems. However, these models are huge in size, with …
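
For orientation, the classical logit-distillation objective underlying most student-teacher setups the review covers fits in a few lines; the temperature T and weight alpha below are illustrative values, not ones prescribed by the review.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classical logit distillation: cross-entropy on hard labels plus a
    temperature-softened KL term toward the teacher's distribution."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T**2 restores gradient scale after softening
    return (1.0 - alpha) * ce + alpha * kl
```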

Decoupled knowledge distillation

B Zhao, Q Cui, R Song, Y Qiu, et al. - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 - openaccess.thecvf.com
State-of-the-art distillation methods are mainly based on distilling deep features from
intermediate layers, while the significance of logit distillation is greatly overlooked. To …
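
A sketch of that decoupling, following the paper's split of the classical KD loss into a target-class term (TCKD) and a non-target-class term (NCKD) with separate weights; the alpha/beta/T values are assumed and details may differ from the official implementation.

```python
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Decoupled KD sketch: the classical KD loss split into a target-class
    term (TCKD) and a non-target-class term (NCKD), weighted separately."""
    gt_mask = F.one_hot(target, student_logits.size(1)).bool()
    s = F.softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)

    # TCKD: KL between binary (target vs. everything else) distributions.
    s_bin = torch.stack([s[gt_mask], 1.0 - s[gt_mask]], dim=1)
    t_bin = torch.stack([t[gt_mask], 1.0 - t[gt_mask]], dim=1)
    tckd = F.kl_div(s_bin.log(), t_bin, reduction="batchmean") * (T * T)

    # NCKD: KL over non-target classes only (target logit masked out).
    s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=1)
    t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(s_nt, t_nt, reduction="batchmean") * (T * T)

    return alpha * tckd + beta * nckd
```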

Revisiting weak-to-strong consistency in semi-supervised semantic segmentation

L Yang, L Qi, L Feng, W Zhang, et al. - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 - openaccess.thecvf.com
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch
from semi-supervised classification, where the prediction of a weakly perturbed image …
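
The weak-to-strong rule is compact enough to sketch directly: confident predictions on a weakly perturbed image become pseudo-labels for its strongly perturbed version. For segmentation the two views must stay spatially aligned (e.g., strong perturbations are photometric only), and the confidence threshold below is an assumed value.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, x_weak, x_strong, threshold=0.95):
    """FixMatch-style consistency: confident predictions on the weak view
    supervise the strong view of the same image."""
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)            # per-pixel pseudo-labels

    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    mask = (conf >= threshold).float()             # keep confident pixels only
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```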

Federated learning on non-IID data: A survey

H Zhu, J Xu, S Liu, Y Jin - Neurocomputing, 2021 - Elsevier
Federated learning is an emerging distributed machine learning framework for privacy
preservation. However, models trained in federated learning usually have worse …
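
The degradation discussed here is typically measured against vanilla FedAvg, one round of which is sketched below: clients train locally from the global weights, and the server averages the results in proportion to client data size. The loaders, step count, and optimizer settings are illustrative, and the sketch assumes every state-dict entry is a float tensor.

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_steps=10, lr=0.01):
    """One FedAvg round: each client fine-tunes a copy of the global model
    on local data; the server averages weights by client dataset size."""
    total = sum(len(dl.dataset) for dl in client_loaders)
    avg = {k: torch.zeros_like(v) for k, v in global_model.state_dict().items()}

    for dl in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _, (x, y) in zip(range(local_steps), dl):
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        weight = len(dl.dataset) / total           # weight clients by data size
        for k, v in local.state_dict().items():
            avg[k] += weight * v

    global_model.load_state_dict(avg)
```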

R-Drop: Regularized dropout for neural networks

L Wu, J Li, Y Wang, Q Meng, T Qin, et al. - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Dropout is a powerful and widely used technique for regularizing the training of deep
neural networks. Though effective, the randomness introduced by dropout …
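
The method itself is a short recipe: run each input through the network twice so the dropout masks differ, apply cross-entropy to both outputs, and add a symmetric KL term between the two predictive distributions. The weight alpha below is an assumed value.

```python
import torch.nn.functional as F

def rdrop_loss(model, x, y, alpha=1.0):
    """R-Drop: two forward passes with independent dropout masks, CE on
    both, plus symmetric KL between the two predicted distributions."""
    logits1, logits2 = model(x), model(x)  # dropout masks differ per pass
    ce = F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y)
    p1 = F.log_softmax(logits1, dim=1)
    p2 = F.log_softmax(logits2, dim=1)
    kl = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
                + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))
    return ce + alpha * kl
```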

Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data

J Huang, D Guan, A Xiao, S Lu - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled
target domain, but it requires access to the source data, which often raises concerns about data …
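
A rough sketch in the spirit of historical contrastive instance discrimination, where embeddings from a frozen earlier checkpoint (the "historical" model) serve as positives for the current model's embeddings; the InfoNCE form, in-batch negatives, and temperature are assumptions for illustration rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def historical_contrastive_loss(curr_encoder, hist_encoder, x, tau=0.07):
    """Illustrative InfoNCE loss: for each target sample, the embedding from
    a frozen historical encoder (an earlier checkpoint) is the positive, and
    the other samples in the batch are negatives."""
    z = F.normalize(curr_encoder(x), dim=1)            # (B, D) current
    with torch.no_grad():
        z_hist = F.normalize(hist_encoder(x), dim=1)   # (B, D) historical
    logits = z @ z_hist.t() / tau                      # (B, B) similarities
    labels = torch.arange(x.size(0), device=x.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```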

Communication-efficient federated learning via knowledge distillation

C Wu, F Wu, L Lyu, Y Huang, X Xie - Nature Communications, 2022 - nature.com
Federated learning is a privacy-preserving machine learning technique for training intelligent
models from decentralized data, which allows private data to be exploited by communicating …
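
The communication saving comes from keeping a large teacher local and sharing only a small student, with the two models distilling from each other on local data; a simplified sketch of one local step is below (the paper additionally compresses the communicated updates, which is omitted here, and the hyperparameters are assumed).

```python
import torch.nn.functional as F

def local_mutual_distillation_step(teacher, student, opt_t, opt_s, x, y, T=2.0):
    """Simplified mutual-distillation local step: the large teacher never
    leaves the client, the small student is what gets communicated, and
    each model distills from the other on local data."""
    t_logits, s_logits = teacher(x), student(x)

    def soft(z):
        return F.softmax(z.detach() / T, dim=1)

    # Student learns from the teacher, and the teacher learns from the student.
    loss_s = F.cross_entropy(s_logits, y) + F.kl_div(
        F.log_softmax(s_logits / T, dim=1), soft(t_logits),
        reduction="batchmean") * T * T
    loss_t = F.cross_entropy(t_logits, y) + F.kl_div(
        F.log_softmax(t_logits / T, dim=1), soft(s_logits),
        reduction="batchmean") * T * T

    opt_t.zero_grad()
    opt_s.zero_grad()
    (loss_t + loss_s).backward()
    opt_t.step()
    opt_s.step()
```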

StylizedNeRF: consistent 3D scene stylization as stylized NeRF via 2D-3D mutual learning

YH Huang, Y He, YJ Yuan, YK Lai, et al. - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 - openaccess.thecvf.com
3D scene stylization aims at generating stylized images of the scene from arbitrary
novel views following a given set of style examples, while ensuring consistency when …

Knowledge distillation: A survey

J Gou, B Yu, SJ Maybank, D Tao - International Journal of Computer Vision, 2021 - Springer
In recent years, deep neural networks have been successful in both industry and academia,
especially for computer vision tasks. The great success of deep learning is mainly due to its …