Classification with deep neural networks and logistic loss
Z Zhang, L Shi, DX Zhou - Journal of Machine Learning Research, 2024 - jmlr.org
Deep neural networks (DNNs) trained with the logistic loss (also known as the cross entropy
loss) have made impressive advancements in various binary classification tasks. Despite the …
The implicit bias of benign overfitting
O Shamir - Conference on Learning Theory, 2022 - proceedings.mlr.press
The phenomenon of benign overfitting, where a predictor perfectly fits noisy training data
while attaining low expected loss, has received much attention in recent years, but still …
From tempered to benign overfitting in ReLU neural networks
G Kornowski, G Yehudai… - Advances in Neural …, 2024 - proceedings.neurips.cc
Overparameterized neural networks (NNs) are observed to generalize well even when
trained to perfectly fit noisy data. This phenomenon motivated a large body of work on …
Inducing neural collapse in deep long-tailed learning
Although deep neural networks achieve tremendous success on various classification tasks,
their generalization ability drops sharply when training datasets exhibit long-tailed distributions …
Explore and exploit the diverse knowledge in model zoo for domain generalization
The proliferation of pretrained models, as a result of advancements in pretraining
techniques, has led to the emergence of a vast zoo of publicly available models. Effectively …
Do we really need a new theory to understand over-parameterization?
This century saw an unprecedented increase in public and private investment in Artificial
Intelligence (AI) and especially in (Deep) Machine Learning (ML). This led to breakthroughs …
Random smoothing regularization in kernel gradient descent learning
Random smoothing data augmentation is a unique form of regularization that can prevent
overfitting by introducing noise to the input data, encouraging the model to learn more …
Exact count of boundary pieces of ReLU classifiers: Towards the proper complexity measure for classification
Classic learning theory suggests that proper regularization is the key to good generalization
and robustness. In classification, current training schemes only target the complexity of the …
Robust classification via regression for learning with noisy labels
E Englesson, H Azizpour - … Congress Center, Vienna, Austria, May 7 …, 2024 - diva-portal.org
Deep neural networks and large-scale datasets have revolutionized the field of machine
learning. However, these large networks are susceptible to overfitting to label noise …
The INSIGHT platform: Enhancing NAD(P)-dependent specificity prediction for co-factor specificity engineering
Enzyme specificity towards cofactors like NAD(P)H is crucial for applications in
bioremediation and eco-friendly chemical synthesis. Despite their role in converting …