Classification with deep neural networks and logistic loss

Z Zhang, L Shi, DX Zhou - Journal of Machine Learning Research, 2024 - jmlr.org
Deep neural networks (DNNs) trained with the logistic loss (also known as the cross entropy
loss) have made impressive advancements in various binary classification tasks. Despite the …
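
For orientation, the logistic loss referenced here has a standard textbook form for binary labels y ∈ {−1, +1} and a real-valued network output f(x); the display below is that standard definition, not an equation quoted from the paper:

```latex
% Logistic loss on a labeled example (x, y), with y in {-1, +1}:
\[
  \ell\bigl(y f(x)\bigr) \;=\; \log\!\bigl(1 + e^{-y f(x)}\bigr),
\]
% equivalently, the cross-entropy loss under the sigmoid link
% p(x) = 1 / (1 + e^{-f(x)}), writing y' = (y + 1)/2 in {0, 1}:
\[
  \ell \;=\; -\,y' \log p(x) \;-\; (1 - y') \log\bigl(1 - p(x)\bigr).
\]
```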

The implicit bias of benign overfitting

O Shamir - Conference on Learning Theory, 2022 - proceedings.mlr.press
The phenomenon of benign overfitting, where a predictor perfectly fits noisy training data
while attaining low expected loss, has received much attention in recent years, but still …
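
The snippet's informal description has a standard formalization in this literature, stated here for context rather than quoted from the paper: a predictor overfits benignly when it interpolates noisy training data yet still generalizes near-optimally.

```latex
% Benign overfitting: \hat f fits every (noisy) training point exactly,
\[
  \hat f(x_i) = y_i \quad \text{for all } i = 1, \dots, n,
  \qquad \text{yet} \qquad
  R(\hat f) - R^{*} \ \text{is small},
\]
% where R(\hat f) = \mathbb{E}\,\ell\bigl(y, \hat f(x)\bigr) is the expected
% (test) loss and R^{*} = \inf_f R(f) is the Bayes-optimal risk.
```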

From tempered to benign overfitting in relu neural networks

G Kornowski, G Yehudai… - Advances in Neural …, 2024 - proceedings.neurips.cc
Overparameterized neural networks (NNs) are observed to generalize well even when
trained to perfectly fit noisy data. This phenomenon motivated a large body of work on …

Inducing neural collapse in deep long-tailed learning

X Liu, J Zhang, T Hu, H Cao, Y Yao… - … Conference on Artificial …, 2023 - proceedings.mlr.press
Although deep neural networks achieve tremendous success on various classification tasks,
the generalization ability drops sharply when training datasets exhibit long-tailed distributions …
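
The "neural collapse" in the title refers to a well-documented terminal phase of training (Papyan et al., 2020); the display below states its standard geometric condition for context, and is not taken from this paper's abstract:

```latex
% Neural collapse (standard formulation): the centered last-layer class
% means \tilde\mu_c converge to a simplex equiangular tight frame, i.e.
% equal norms and maximally separated pairwise angles over the C classes:
\[
  \frac{\langle \tilde\mu_c,\, \tilde\mu_{c'} \rangle}
       {\lVert \tilde\mu_c \rVert \, \lVert \tilde\mu_{c'} \rVert}
  \;\longrightarrow\; -\frac{1}{C - 1}
  \qquad \text{for all } c \neq c',
\]
% while within-class feature variability collapses toward zero.
```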

Explore and exploit the diverse knowledge in model zoo for domain generalization

Y Chen, T Hu, F Zhou, Z Li… - … Conference on Machine …, 2023 - proceedings.mlr.press
The proliferation of pretrained models, as a result of advancements in pretraining
techniques, has led to the emergence of a vast zoo of publicly available models. Effectively …

Do we really need a new theory to understand over-parameterization?

L Oneto, S Ridella, D Anguita - Neurocomputing, 2023 - Elsevier
This century has seen an unprecedented increase in public and private investment in Artificial
Intelligence (AI) and especially in (Deep) Machine Learning (ML). This led to breakthroughs …

Random smoothing regularization in kernel gradient descent learning

L Ding, T Hu, J Jiang, D Li, W Wang, Y Yao - arXiv preprint arXiv …, 2023 - arxiv.org
Random smoothing data augmentation is a unique form of regularization that can prevent
overfitting by introducing noise to the input data, encouraging the model to learn more …
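
As a concrete illustration of the mechanism the snippet describes, here is a minimal sketch of input-noise augmentation, shown for plain NumPy gradient descent on linear least squares. All names and hyperparameters are illustrative, and the paper itself studies random smoothing in kernel gradient descent learning, so this demonstrates only the augmentation idea, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

sigma = 0.3      # std of the smoothing noise added to the inputs
lr = 0.01        # gradient-descent step size
w = np.zeros(d)

for step in range(2000):
    # Draw a fresh noisy copy of the inputs each step; the model must fit
    # targets from perturbed inputs, which discourages overfitting.
    X_noisy = X + sigma * rng.normal(size=X.shape)
    grad = X_noisy.T @ (X_noisy @ w - y) / n
    w -= lr * grad

# For linear models this augmentation is equivalent in expectation to ridge
# regression, since E[X_noisy.T @ X_noisy] = X.T @ X + n * sigma**2 * I;
# the fitted w is therefore shrunk relative to the unregularized solution.
print("||w_fit|| =", np.linalg.norm(w), " vs ||w_true|| =", np.linalg.norm(w_true))
```

The ridge equivalence in the closing comment is the classical observation (due to Bishop, 1995) that motivates treating input noise as an explicit regularizer.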

Exact count of boundary pieces of relu classifiers: Towards the proper complexity measure for classification

P Piwek, A Klukowski, T Hu - Uncertainty in Artificial …, 2023 - proceedings.mlr.press
Classic learning theory suggests that proper regularization is the key to good generalization
and robustness. In classification, current training schemes only target the complexity of the …
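
To make the distinction in the title concrete: a ReLU network computes a piecewise-linear function, but its decision boundary can have far fewer pieces than the function itself. The toy 1-D sketch below contrasts the two counts for a random one-hidden-layer network; it is illustrative only, not the paper's exact counting algorithm, which handles the general classification setting.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 50
w = rng.normal(size=m)   # hidden-layer weights (scalar input)
b = rng.normal(size=m)   # hidden-layer biases
v = rng.normal(size=m)   # output weights

def f(x):
    # One-hidden-layer ReLU network on a scalar input.
    return np.maximum(w * x + b, 0.0) @ v

# Each hidden unit switches at x = -b_j / w_j, so f has at most m
# breakpoints and m + 1 linear pieces.
breaks = np.sort(-b / w)
pieces = len(breaks) + 1

# Probe f at every breakpoint, every midpoint between breakpoints, and far
# beyond both ends; f is linear between consecutive probes, so each sign
# change marks one decision-boundary point (crossings beyond the probe
# range are ignored in this sketch).
mids = (breaks[:-1] + breaks[1:]) / 2
probes = np.sort(np.concatenate(
    ([breaks[0] - 1e3], breaks, mids, [breaks[-1] + 1e3])))
signs = np.sign([f(x) for x in probes])
boundary_points = int(np.sum(signs[:-1] != signs[1:]))

print(f"linear pieces of f: {pieces}, decision-boundary points: {boundary_points}")
```

Running this typically reports far fewer boundary points than linear pieces, which is the gap between function complexity and boundary complexity that the paper's exact count targets.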

Robust classification via regression for learning with noisy labels

E Englesson, H Azizpour - … Congress Center, Vienna, Austria, May 7 …, 2024 - diva-portal.org
Deep neural networks and large-scale datasets have revolutionized the field of machine
learning. However, these large networks are susceptible to overfitting to label noise …

The INSIGHT platform: Enhancing NAD(P)-dependent specificity prediction for co-factor specificity engineering

Y Ye, H Jiang, R Xu, S Wang, L Zheng, J Guo - International Journal of …, 2024 - Elsevier
Enzyme specificity towards cofactors like NAD(P)H is crucial for applications in
bioremediation and eco-friendly chemical synthesis. Despite their role in converting …