Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis

D Karimi, H Dou, SK Warfield, A Gholipour - Medical image analysis, 2020 - Elsevier
Supervised training of deep learning models requires large labeled datasets. There is a
growing interest in obtaining such datasets for medical image analysis applications …

Towards robust pattern recognition: A review

XY Zhang, CL Liu, CY Suen - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
The accuracies for many pattern recognition tasks have increased rapidly year by year,
achieving or even outperforming human performance. From the perspective of accuracy …

Part-based pseudo label refinement for unsupervised person re-identification

Y Cho, WJ Kim, S Hong… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Unsupervised person re-identification (re-ID) aims at learning discriminative representations
for person retrieval from unlabeled data. Recent techniques accomplish this task by using …

Partial success in closing the gap between human and machine vision

R Geirhos, K Narayanappa, B Mitzkus… - Advances in …, 2021 - proceedings.neurips.cc
A few years ago, the first CNN surpassed human performance on ImageNet. However, it
soon became clear that machines lack robustness on more challenging test cases, a major …

DivideMix: Learning with noisy labels as semi-supervised learning

J Li, R Socher, SCH Hoi - arXiv preprint arXiv:2002.07394, 2020 - arxiv.org
Deep neural networks are known to be annotation-hungry. Numerous efforts have been
devoted to reducing the annotation cost when learning with deep networks. Two prominent …
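The snippet is cut short, but DivideMix's core idea is well documented: fit a two-component Gaussian mixture to per-sample training losses, treat the low-loss component as probably clean, and handle the rest as unlabeled data with semi-supervised learning. Below is a minimal sketch of only the clean/noisy partitioning step, assuming a trained PyTorch classifier `model` and a `DataLoader` `loader`; the function and variable names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def split_clean_noisy(model, loader, device="cpu", clean_threshold=0.5):
    """Sketch of DivideMix-style partitioning: fit a 2-component GMM to
    per-sample cross-entropy losses and use the posterior of the
    low-mean component as P(clean)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            # per-sample (unreduced) cross-entropy loss
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            losses.append(loss.cpu())
    losses = torch.cat(losses).numpy().reshape(-1, 1)
    # normalize losses to [0, 1] for a more stable mixture fit
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)

    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    # posterior probability of belonging to the low-loss ("clean") component
    clean_component = gmm.means_.argmin()
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean > clean_threshold   # boolean mask over the dataset
```

In the full method, the "noisy" subset keeps its images but discards its labels and is trained with MixMatch-style semi-supervised learning across two co-trained networks; that part is omitted here.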

Symmetric cross entropy for robust learning with noisy labels

Y Wang, X Ma, Z Chen, Y Luo, J Yi… - Proceedings of the …, 2019 - openaccess.thecvf.com
Training accurate deep neural networks (DNNs) in the presence of noisy labels is an
important and challenging task. Though a number of approaches have been proposed for …
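The symmetric cross entropy loss combines the usual cross entropy with a reverse cross entropy term in which the roles of prediction and one-hot label are swapped and log 0 is clipped to a negative constant. A minimal PyTorch sketch follows; the weights `alpha`, `beta` and the clipping value `A = -4` reflect commonly reported settings, but treat the exact hyperparameters as assumptions.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """Sketch of the SCE loss: alpha * CE + beta * RCE.
    RCE swaps prediction and label in the cross entropy and clips
    log(0) from the one-hot label to the constant A."""
    ce = F.cross_entropy(logits, targets)

    probs = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    num_classes = logits.size(1)
    one_hot = F.one_hot(targets, num_classes).float()
    # log of the one-hot label: 0 where the label is 1, A where it is 0
    log_labels = torch.where(one_hot > 0, torch.zeros_like(one_hot),
                             torch.full_like(one_hot, A))
    rce = -(probs * log_labels).sum(dim=1).mean()

    return alpha * ce + beta * rce
```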

Confident learning: Estimating uncertainty in dataset labels

C Northcutt, L Jiang, I Chuang - Journal of Artificial Intelligence Research, 2021 - jair.org
Learning exists in the context of data, yet notions of confidence typically focus on model
predictions, not label quality. Confident learning (CL) is an alternative approach which …
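Confident learning flags likely label errors by comparing each example's out-of-sample predicted probabilities against per-class confidence thresholds (the mean predicted probability of a class over examples carrying that label). The sketch below is a simplified, from-scratch version of that counting rule in NumPy; it is not the authors' cleanlab implementation and omits the joint-distribution calibration and per-class pruning details.

```python
import numpy as np

def suspect_label_errors(pred_probs, labels):
    """Simplified confident-learning style check.
    pred_probs: (n, K) out-of-sample predicted probabilities.
    labels: (n,) observed (possibly noisy) integer labels.
    Returns a boolean mask of examples whose label looks suspicious."""
    n, K = pred_probs.shape
    # per-class threshold: mean confidence in class j over examples labeled j
    thresholds = np.array([
        pred_probs[labels == j, j].mean() if np.any(labels == j) else 1.0
        for j in range(K)
    ])

    suspects = np.zeros(n, dtype=bool)
    for i in range(n):
        # classes the model predicts "confidently" for this example
        confident = np.where(pred_probs[i] >= thresholds)[0]
        if len(confident) == 0:
            continue
        # most likely confident class; flag if it disagrees with the given label
        best = confident[np.argmax(pred_probs[i, confident])]
        suspects[i] = (best != labels[i])
    return suspects
```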

Normalized loss functions for deep learning with noisy labels

X Ma, H Huang, Y Wang, S Romano… - International …, 2020 - proceedings.mlr.press
Robust loss functions are essential for training accurate deep neural networks (DNNs) in the
presence of noisy (incorrect) labels. It has been shown that the commonly used Cross …
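The normalization proposed in this line of work divides a loss by its sum over all possible target classes, which bounds it and makes it provably robust to symmetric label noise; for cross entropy this gives NCE(f(x), y) = CE(f(x), y) / Σ_j CE(f(x), j). A minimal PyTorch sketch of normalized cross entropy, with names chosen for illustration:

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    """Sketch of NCE: per-sample CE divided by the sum of CE values
    obtained by treating every class in turn as the label."""
    log_probs = F.log_softmax(logits, dim=1)                      # (n, K)
    ce_y = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # CE with given label
    ce_all = -log_probs.sum(dim=1)                                # sum of CE over all K labels
    return (ce_y / ce_all).mean()
```

The paper reports that a normalized loss alone can underfit, so it pairs an "active" normalized loss with a "passive" one (e.g., NCE combined with MAE or RCE); that combination is not shown here.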

Meta-weight-net: Learning an explicit mapping for sample weighting

J Shu, Q Xie, L Yi, Q Zhao, S Zhou… - Advances in neural …, 2019 - proceedings.neurips.cc
Current deep neural networks (DNNs) can easily overfit to biased training data with
corrupted labels or class imbalance. Sample re-weighting strategy is commonly used to …
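Meta-Weight-Net learns the sample-weighting function itself: a tiny MLP maps each example's loss to a weight in [0, 1], and that MLP is updated by a meta-gradient computed on a small clean validation set. A compact sketch of the weighting network and the weighted training loss is below; the bilevel meta-update (which differentiates through a one-step virtual update of the classifier) is only indicated by a comment, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNet(nn.Module):
    """Tiny MLP mapping a per-sample loss value to a weight in [0, 1]."""
    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, loss_values):              # (n,) -> (n,)
        return self.net(loss_values.unsqueeze(1)).squeeze(1)

def weighted_training_loss(model, weight_net, x, y):
    """Weighted loss for one batch: weights come from the weight net,
    applied to per-sample cross-entropy losses."""
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    # weights are treated as constants for the classifier step; in the
    # actual method the weight net is updated separately via a
    # meta-gradient on a small clean validation batch (omitted here).
    weights = weight_net(per_sample.detach())
    return (weights * per_sample).mean()
```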

Unsupervised label noise modeling and loss correction

E Arazo, D Ortego, P Albert… - International …, 2019 - proceedings.mlr.press
Despite being robust to small amounts of label noise, convolutional neural networks trained
with stochastic gradient methods have been shown to easily fit random labels. When there …
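This work models the per-sample loss distribution with a two-component beta mixture (clean vs. noisy) and uses each example's posterior probability of being noisy to interpolate between the given label and the network's own prediction (dynamic bootstrapping). The sketch below shows only the soft bootstrapping loss, assuming the per-sample noise posteriors `w` have already been obtained from such a mixture model; the beta-mixture EM fit itself is not shown.

```python
import torch
import torch.nn.functional as F

def soft_bootstrapping_loss(logits, targets, w):
    """Sketch of dynamic (soft) bootstrapping:
    the target becomes (1 - w) * one_hot(y) + w * softmax(logits),
    where w is each sample's posterior probability of having a noisy label."""
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp().detach()                 # model's own prediction
    one_hot = F.one_hot(targets, num_classes).float()

    soft_targets = (1.0 - w).unsqueeze(1) * one_hot + w.unsqueeze(1) * probs
    # cross entropy against the interpolated target
    return -(soft_targets * log_probs).sum(dim=1).mean()
```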