Learning from noisy labels with deep neural networks: A survey
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of data. However, the quality of data labels is a concern because of the lack of …
Intelligent fault diagnosis of rolling bearing based on wavelet transform and improved ResNet under noisy labels and environment
P Liang, W Wang, X Yuan, S Liu, L Zhang… - … Applications of Artificial …, 2022 - Elsevier
The fault diagnosis (FD) of rolling bearings (RB) is of great significance for the safe operation of engineering equipment. Many intelligent diagnosis methods have been successfully …
Learning with noisy labels revisited: A study using real-world human annotations
Existing research on learning with noisy labels mainly focuses on synthetic label noise.
Synthetic noise, though it has a clean structure that greatly enables statistical analysis, often …
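For contrast with the real-world human annotations this paper collects, a minimal sketch of the symmetric synthetic noise the snippet alludes to; the function name, noise rate, and seed are illustrative, not from the paper:

```python
import numpy as np

def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """Flip each label, with probability noise_rate, to a class drawn
    uniformly from the remaining classes -- the 'clean structure' that
    makes synthetic noise easy to analyze statistically."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_rate)[0]:
        new = rng.integers(num_classes)
        while new == noisy[i]:           # re-draw until the class changes
            new = rng.integers(num_classes)
        noisy[i] = new
    return noisy

y = np.array([0, 1, 2, 1, 0])
print(inject_symmetric_noise(y, num_classes=3, noise_rate=0.4))
```

Real human noise, by contrast, is instance-dependent and has no such simple flipping structure, which is the gap this study targets.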
Robust training under label noise by over-parameterization
Recently, over-parameterized deep networks, with more network parameters than training samples, have dominated the performance of modern machine learning …
Does label smoothing mitigate label noise?
Label smoothing is commonly used in training deep learning models, wherein one-hot
training labels are mixed with uniform label vectors. Empirically, smoothing has been shown …
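The mixing the snippet describes is a one-line operation; here is a minimal numpy sketch, with the smoothing weight alpha chosen arbitrarily for illustration:

```python
import numpy as np

def smooth_labels(one_hot, alpha=0.1):
    """Mix one-hot targets with the uniform label vector over K classes:
    y_smooth = (1 - alpha) * y + alpha / K."""
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k

y = np.array([0.0, 1.0, 0.0])           # true class is index 1, K = 3
print(smooth_labels(y))                  # [0.0333... 0.9333... 0.0333...]
```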
Learning with instance-dependent label noise: A sample sieve approach
Human-annotated labels are often prone to noise, and the presence of such noise will
degrade the performance of the resulting deep neural network (DNN) models. Much of the …
Scarf: Self-supervised contrastive learning using random feature corruption
Self-supervised contrastive representation learning has proved incredibly successful in the
vision and natural language domains, enabling state-of-the-art performance with orders of …
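Since the snippet is cut off, a rough sketch of the random feature corruption named in the title (my reading of the title, not the paper's code): corrupt a random subset of each row's features by resampling them from other rows, then treat the original and corrupted rows as a positive pair; the corruption rate is illustrative:

```python
import numpy as np

def random_feature_corruption(batch, corruption_rate=0.6, seed=0):
    """Return a corrupted view of a tabular batch: each entry is, with
    probability corruption_rate, replaced by the same feature's value
    taken from a randomly chosen row (i.e., its empirical marginal)."""
    rng = np.random.default_rng(seed)
    n, d = batch.shape
    mask = rng.random((n, d)) < corruption_rate
    donors = rng.integers(n, size=(n, d))     # row to steal each value from
    return np.where(mask, batch[donors, np.arange(d)], batch)

x = np.arange(12.0).reshape(4, 3)
x_view = random_feature_corruption(x)         # contrastive pair: (x, x_view)
```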
Tempered sigmoid activations for deep learning with differential privacy
N Papernot, A Thakurta, S Song, S Chien… - Proceedings of the …, 2021 - ojs.aaai.org
Because learning sometimes involves sensitive data, machine learning algorithms have
been extended to offer differential privacy for training data. In practice, this has been mostly …
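A minimal sketch of the tempered-sigmoid family the title refers to, parameterized here by a scale s, inverse temperature T, and offset o (my reading of the paper; the defaults shown recover tanh):

```python
import numpy as np

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """s * sigmoid(T * x) - o: a bounded activation whose range [-o, s - o]
    keeps activations, and hence clipped gradients, under control."""
    return s / (1.0 + np.exp(-T * x)) - o

x = np.linspace(-3.0, 3.0, 7)
print(np.allclose(tempered_sigmoid(x), np.tanh(x)))  # True: tanh = psi(2, 2, 1)
```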
Large-scale differentially private BERT
In this work, we study the large-scale pretraining of BERT-Large with differentially private
SGD (DP-SGD). We show that combined with a careful implementation, scaling up the batch …
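For readers unfamiliar with DP-SGD, a minimal numpy sketch of the per-example clipping and Gaussian noising it adds to ordinary SGD (the standard Abadi et al. recipe, not this paper's implementation; hyperparameter names are illustrative):

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Clip each example's gradient to clip_norm, sum, add Gaussian noise
    with std noise_multiplier * clip_norm, and average over the batch."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(per_example_grads[0])
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))
    total += rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]   # per-example gradients
print(dp_sgd_gradient(grads))
```

Because the noise is added once per batch, averaging over a larger batch shrinks its relative magnitude, which is why the batch scaling the snippet mentions matters for DP pretraining.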
A second-order approach to learning with instance-dependent label noise
The presence of label noise often misleads the training of deep neural networks. Departing
from the recent literature which largely assumes the label noise rate is only determined by …