Deep learning in electron microscopy

JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …

Learning from history for Byzantine robust optimization

SP Karimireddy, L He, M Jaggi - International Conference on …, 2021 - proceedings.mlr.press
Byzantine robustness has received significant attention recently given its importance for
distributed and federated learning. In spite of this, we identify severe flaws in existing …

Federated Learning and Meta Learning: Approaches, Applications, and Directions

X Liu, Y Deng, A Nallanathan… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Over the past few years, significant advancements have been made in the field of machine
learning (ML) to address resource management, interference management, autonomy, and …

Acceleration methods

A d'Aspremont, D Scieur, A Taylor - Foundations and Trends® …, 2021 - nowpublishers.com
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
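For context on what "acceleration" refers to in this setting, a canonical example is Nesterov's accelerated gradient method; the update below is a standard textbook sketch (not taken from the monograph itself), with step size $\eta$ and one common choice of momentum parameter $\beta_t$:

```latex
% Nesterov's accelerated gradient method (standard form for smooth convex f):
x_{t+1} = y_t - \eta\,\nabla f(y_t), \qquad
y_{t+1} = x_{t+1} + \beta_t\,(x_{t+1} - x_t),
\quad \text{e.g. } \beta_t = \tfrac{t}{t+3}.
```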

Why are adaptive methods good for attention models?

J Zhang, SP Karimireddy, A Veit… - Advances in …, 2020 - proceedings.neurips.cc
While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning,
adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across …

Robustness to unbounded smoothness of generalized SignSGD

M Crawshaw, M Liu, F Orabona… - Advances in neural …, 2022 - proceedings.neurips.cc
Traditional analyses in non-convex optimization typically rely on the smoothness
assumption, namely requiring the gradients to be Lipschitz. However, recent evidence …
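For reference, the smoothness assumption mentioned in the snippet, and the kind of relaxation this line of work studies, can be written as follows; the generalized condition is stated here in a common $(L_0, L_1)$-smoothness form as an illustrative assumption, not as the paper's exact definition:

```latex
% Standard smoothness: the gradient of f is L-Lipschitz
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \forall\, x, y.

% A common relaxation ((L_0, L_1)-smoothness): the local smoothness constant
% may grow with the gradient norm; L-smoothness is the special case L_1 = 0.
\|\nabla^2 f(x)\| \le L_0 + L_1\,\|\nabla f(x)\|.
```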

High-probability bounds for stochastic optimization and variational inequalities: the case of unbounded variance

A Sadiev, M Danilova, E Gorbunov… - International …, 2023 - proceedings.mlr.press
In recent years, the interest of the optimization and machine learning communities in
high-probability convergence of stochastic optimization methods has been growing. One of …

Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees

A Koloskova, H Hendrikx… - … Conference on Machine …, 2023 - proceedings.mlr.press
Gradient clipping is a popular modification to standard (stochastic) gradient descent, at
every iteration limiting the gradient norm to a certain value $c > 0$. It is widely used for …
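The operation described here is straightforward to state in code. The sketch below is illustrative only (hypothetical function names, NumPy for concreteness): it rescales a stochastic gradient so its Euclidean norm never exceeds $c$, then takes an SGD step with the clipped gradient.

```python
import numpy as np

def clip_gradient(grad: np.ndarray, c: float) -> np.ndarray:
    """Rescale grad so that its Euclidean norm is at most c (with c > 0)."""
    norm = np.linalg.norm(grad)
    if norm <= c:
        return grad
    return grad * (c / norm)

def clipped_sgd_step(x: np.ndarray, grad: np.ndarray, lr: float, c: float) -> np.ndarray:
    """One clipped-SGD update: x_{t+1} = x_t - lr * clip_c(grad)."""
    return x - lr * clip_gradient(grad, c)

# Example usage with a synthetic gradient:
x = np.zeros(3)
g = np.array([3.0, 4.0, 0.0])              # norm 5
x = clipped_sgd_step(x, g, lr=0.1, c=1.0)  # step uses the rescaled gradient of norm 1
```

Clipping by rescaling, rather than truncating coordinates, preserves the gradient's direction and only bounds its magnitude.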

Stochastic training is not necessary for generalization

J Geiping, M Goldblum, PE Pope, M Moeller… - arXiv preprint arXiv …, 2021 - arxiv.org
It is widely believed that the implicit regularization of SGD is fundamental to the impressive
generalization behavior we observe in neural networks. In this work, we demonstrate that …

High probability convergence of stochastic gradient methods

Z Liu, TD Nguyen, TH Nguyen… - … on Machine Learning, 2023 - proceedings.mlr.press
In this work, we describe a generic approach to show convergence with high probability for
both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous …
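For reference, one standard way to formalize the sub-Gaussian noise condition used in high-probability analyses of this kind is the following (an assumed form for illustration; the paper's exact assumption may differ):

```latex
% Unbiased stochastic gradients with sub-Gaussian noise of scale sigma:
\mathbb{E}\big[\nabla F(x,\xi)\big] = \nabla f(x),
\qquad
\mathbb{E}\Big[\exp\!\big(\|\nabla F(x,\xi) - \nabla f(x)\|^{2} / \sigma^{2}\big)\Big] \le \exp(1).
```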