Deep learning in electron microscopy
JM Ede - Machine Learning: Science and Technology, 2021 - iopscience.iop.org
Deep learning is transforming most areas of science and technology, including electron
microscopy. This review paper offers a practical perspective aimed at developers with …
Learning from history for Byzantine robust optimization
Byzantine robustness has received significant attention recently given its importance for
distributed and federated learning. In spite of this, we identify severe flaws in existing …
Federated Learning and Meta Learning: Approaches, Applications, and Directions
Over the past few years, significant advancements have been made in the field of machine
learning (ML) to address resource management, interference management, autonomy, and …
Acceleration methods
This monograph covers some recent advances in a range of acceleration techniques
frequently used in convex optimization. We first use quadratic optimization problems to …
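As a point of reference (not taken from the monograph itself), the prototypical accelerated scheme is Nesterov's method, which augments gradient descent with an extrapolation step:
$$y_k = x_k + \beta_k (x_k - x_{k-1}), \qquad x_{k+1} = y_k - \alpha \nabla f(y_k),$$
which for $L$-smooth convex $f$ with $\alpha = 1/L$ and suitably chosen momentum $\beta_k$ improves the $O(1/k)$ rate of plain gradient descent to $O(1/k^2)$.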
Why are adaptive methods good for attention models?
While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning,
adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across …
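For context, the Adam-style adaptive update contrasted with SGD here maintains exponential moving averages of the gradient and its elementwise square (standard notation, given as a reminder rather than this paper's exact variant):
$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2, \quad x_{t+1} = x_t - \eta \, \hat m_t / (\sqrt{\hat v_t} + \epsilon),$$
with bias corrections $\hat m_t = m_t/(1-\beta_1^t)$ and $\hat v_t = v_t/(1-\beta_2^t)$.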
Robustness to unbounded smoothness of generalized SignSGD
Traditional analyses in non-convex optimization typically rely on the smoothness
assumption, namely requiring the gradients to be Lipschitz. However, recent evidence …
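The smoothness assumption referred to in the abstract is the standard Lipschitz-gradient condition,
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \text{for all } x, y,$$
and the title's "unbounded smoothness" refers to settings where no such finite global constant $L$ exists.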
High-probability bounds for stochastic optimization and variational inequalities: the case of unbounded variance
In recent years, the interest of the optimization and machine learning communities in
high-probability convergence of stochastic optimization methods has been growing. One of …
Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees
A Koloskova, H Hendrikx… - … Conference on Machine …, 2023 - proceedings.mlr.press
Gradient clipping is a popular modification to standard (stochastic) gradient descent, at
every iteration limiting the gradient norm to a certain value $c > 0$. It is widely used for …
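For concreteness, clipping by norm as described here can be sketched in a few lines of Python (a generic illustration under my own naming, not the paper's code; the threshold $c$ follows the abstract):

    import numpy as np

    def clip_by_norm(grad: np.ndarray, c: float) -> np.ndarray:
        # Rescale the gradient so its Euclidean norm is at most c; leave it unchanged otherwise.
        norm = np.linalg.norm(grad)
        return grad * (c / norm) if norm > c else grad

    def clipped_sgd_step(x: np.ndarray, grad: np.ndarray, lr: float, c: float) -> np.ndarray:
        # One iteration of clipped SGD: descend along the clipped gradient.
        return x - lr * clip_by_norm(grad, c)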
Stochastic training is not necessary for generalization
It is widely believed that the implicit regularization of SGD is fundamental to the impressive
generalization behavior we observe in neural networks. In this work, we demonstrate that …
High probability convergence of stochastic gradient methods
In this work, we describe a generic approach to show convergence with high probability for
both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous …
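One common formalization of the sub-Gaussian noise condition mentioned here (stated as an assumption on my part, since the snippet does not spell it out) is
$$\mathbb{E}\left[\exp\left(\|\nabla F(x,\xi) - \nabla f(x)\|^2 / \sigma^2\right)\right] \le \exp(1),$$
i.e. the stochastic gradient error has tails that decay at least as fast as a Gaussian's.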