Learning from noisy labels with deep neural networks: A survey

H Song, M Kim, D Park, Y Shin… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep learning has achieved remarkable success in numerous domains with the help of large
amounts of data. However, the quality of data labels is a concern because of the lack of …

Intelligent fault diagnosis of rolling bearing based on wavelet transform and improved ResNet under noisy labels and environment

P Liang, W Wang, X Yuan, S Liu, L Zhang… - … Applications of Artificial …, 2022 - Elsevier
The fault diagnosis (FD) of rolling bearings (RB) is of great significance for the safe operation
of engineering equipment. Many intelligent diagnosis methods have been successfully …

Learning with noisy labels revisited: A study using real-world human annotations

J Wei, Z Zhu, H Cheng, T Liu, G Niu, Y Liu - arXiv preprint arXiv …, 2021 - arxiv.org
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
Synthetic noise, though it has clean structures that greatly enable statistical analyses, often …
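
The "synthetic label noise" contrasted above is typically injected programmatically. As a minimal illustration (not code from the paper; the helper name and its arguments are ours), the common symmetric noise model flips each label, with a fixed probability, to a uniformly random other class:

import numpy as np

def flip_labels_symmetric(labels, noise_rate, num_classes, seed=None):
    # Symmetric (uniform) synthetic label noise: each label is
    # replaced, with probability noise_rate, by a class drawn
    # uniformly from the remaining num_classes - 1 classes.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    for i in np.flatnonzero(rng.random(len(labels)) < noise_rate):
        wrong = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(wrong)
    return labels

# e.g. flip 20% of CIFAR-10-style labels:
# noisy = flip_labels_symmetric(y_train, noise_rate=0.2, num_classes=10)

This clean structure (a known, class-independent flip probability) is exactly what makes synthetic noise amenable to statistical analysis and distinguishes it from real human annotation errors.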

Robust training under label noise by over-parameterization

S Liu, Z Zhu, Q Qu, C You - International Conference on …, 2022 - proceedings.mlr.press
Recently, over-parameterized deep networks, with many more network parameters than
training samples, have dominated the performance of modern machine learning …

Does label smoothing mitigate label noise?

M Lukasik, S Bhojanapalli, A Menon… - … on Machine Learning, 2020 - proceedings.mlr.press
Label smoothing is commonly used in training deep learning models, wherein one-hot
training labels are mixed with uniform label vectors. Empirically, smoothing has been shown …
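
The mechanism in this snippet has a one-line form: with smoothing strength α and K classes, the target becomes (1 - α) · one_hot(y) + α/K. A minimal sketch (the helper name and the default α are ours):

import numpy as np

def smooth_labels(labels, num_classes, alpha=0.1):
    # Mix one-hot targets with the uniform label vector:
    # y_smooth = (1 - alpha) * one_hot(y) + alpha / num_classes
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - alpha) * one_hot + alpha / num_classes

# Example: alpha=0.1 with 4 classes turns label 2 into
# [0.025, 0.025, 0.925, 0.025].
print(smooth_labels(np.array([2]), num_classes=4, alpha=0.1))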

Learning with instance-dependent label noise: A sample sieve approach

H Cheng, Z Zhu, X Li, Y Gong, X Sun, Y Liu - arXiv preprint arXiv …, 2020 - arxiv.org
Human-annotated labels are often prone to noise, and the presence of such noise will
degrade the performance of the resulting deep neural network (DNN) models. Much of the …

Scarf: Self-supervised contrastive learning using random feature corruption

D Bahri, H Jiang, Y Tay, D Metzler - arXiv preprint arXiv:2106.15147, 2021 - arxiv.org
Self-supervised contrastive representation learning has proved incredibly successful in the
vision and natural language domains, enabling state-of-the-art performance with orders of …
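
SCARF's title names its corruption scheme: for each example, a random subset of features is replaced by draws from those features' empirical marginal distributions, and the corrupted view is contrasted against the original with an InfoNCE loss. The sketch below covers only the corruption step and approximates the marginals by resampling within the batch; the function name, the in-batch approximation, and the rate are our assumptions for illustration:

import numpy as np

def scarf_corrupt(batch, corruption_rate=0.6, seed=None):
    # For each row, replace a random fraction of features with
    # values taken from the same feature column of other rows,
    # approximating draws from each feature's marginal.
    rng = np.random.default_rng(seed)
    batch = np.asarray(batch, dtype=float)
    n, d = batch.shape
    mask = rng.random((n, d)) < corruption_rate
    donor_rows = rng.integers(0, n, size=(n, d))
    corrupted = batch.copy()
    corrupted[mask] = batch[donor_rows, np.arange(d)][mask]
    return corrupted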

Tempered sigmoid activations for deep learning with differential privacy

N Papernot, A Thakurta, S Song, S Chien… - Proceedings of the …, 2021 - ojs.aaai.org
Because learning sometimes involves sensitive data, machine learning algorithms have
been extended to offer differential privacy for training data. In practice, this has been mostly …
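
The paper's fix, per its title, is a bounded activation family: a tempered sigmoid is s · sigmoid(T · x) - o, with output confined to [-o, s - o]. Bounding activations keeps per-example gradients small, so less signal is lost to the gradient clipping that differentially private training requires. A small sketch (parameter names are ours):

import numpy as np

def tempered_sigmoid(x, scale=2.0, temperature=2.0, offset=1.0):
    # Tempered sigmoid family: s * sigmoid(T * x) - o,
    # bounded in [-offset, scale - offset].
    return scale / (1.0 + np.exp(-temperature * x)) - offset

# With s=2, T=2, o=1 the family recovers tanh:
x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(tempered_sigmoid(x), np.tanh(x))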

Large-scale differentially private BERT

R Anil, B Ghazi, V Gupta, R Kumar… - arXiv preprint arXiv …, 2021 - arxiv.org
In this work, we study the large-scale pretraining of BERT-Large with differentially private
SGD (DP-SGD). We show that combined with a careful implementation, scaling up the batch …
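
DP-SGD, the training procedure scaled up here, clips each example's gradient and adds calibrated Gaussian noise before averaging; larger batches raise the signal-to-noise ratio of the averaged update, which is why batch scaling matters in this paper. A minimal numpy sketch of one step (names and defaults are illustrative; a real implementation also tracks the privacy budget with an accountant):

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0, seed=None):
    # 1. Clip each per-example gradient to L2 norm clip_norm.
    # 2. Sum and add Gaussian noise with std noise_multiplier * clip_norm.
    # 3. Average over the batch and take a gradient step.
    rng = np.random.default_rng(seed)
    grads = np.asarray(per_example_grads)          # (batch, dim)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(grads)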

A second-order approach to learning with instance-dependent label noise

Z Zhu, T Liu, Y Liu - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
The presence of label noise often misleads the training of deep neural networks. Departing
from the recent literature which largely assumes the label noise rate is only determined by …