One fits all: Power general time series analysis by pretrained LM
Although we have witnessed great success of pre-trained models in natural language
processing (NLP) and computer vision (CV), limited progress has been made for general …
A unified theory of decentralized SGD with changing topology and local updates
Decentralized stochastic optimization methods have gained a lot of attention recently, mainly
because of their cheap per-iteration cost, data locality, and communication efficiency. In …
Sparsified SGD with memory
SU Stich, JB Cordonnier… - Advances in neural …, 2018 - proceedings.neurips.cc
Huge-scale machine learning problems are nowadays tackled by distributed optimization
algorithms, i.e., algorithms that leverage the compute power of many devices for training. The …
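As context for the entry above, here is a minimal sketch of top-k gradient sparsification with an error-feedback memory term, in the spirit the title suggests; the function names, the dense-list representation, and the step layout are illustrative assumptions, not taken from the paper.

```python
def topk_sparsify(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest.
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    keep = set(idx)
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def sgd_with_memory_step(w, grad, memory, lr, k):
    # Error feedback: add the residual carried in `memory` to the scaled
    # gradient, transmit only the top-k entries, and store the rest so no
    # gradient information is permanently dropped.
    update = [m + lr * g for m, g in zip(memory, grad)]
    sparse = topk_sparsify(update, k)
    new_memory = [u - s for u, s in zip(update, sparse)]
    new_w = [wi - s for wi, s in zip(w, sparse)]
    return new_w, new_memory
```

Note the invariant that makes the analysis go through: the applied update plus the stored residual always equals the full (dense) update.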
Local SGD converges fast and communicates little
SU Stich - arXiv preprint arXiv:1805.09767, 2018 - arxiv.org
Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed
training. The scheme can reach a linear speedup with respect to the number of workers, but …
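The scheme this entry studies can be sketched in a few lines: each worker runs several SGD steps on its own objective, and the workers average their iterates only once per round. The quadratic per-worker objectives in the usage below are an illustrative assumption, not the paper's experimental setup.

```python
def local_sgd(grads, w0, lr, local_steps, rounds):
    # grads: one gradient function per worker (its local objective).
    w = w0
    for _ in range(rounds):
        local = []
        for g in grads:
            wi = w
            for _ in range(local_steps):   # H local SGD steps, no communication
                wi = wi - lr * g(wi)
            local.append(wi)
        w = sum(local) / len(local)        # one averaging step per round
    return w
```

For example, with workers minimizing (w - 1)^2/2 and (w - 3)^2/2, the averaged iterate converges to the global minimizer 2.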
A modern introduction to online learning
F Orabona - arXiv preprint arXiv:1912.13213, 2019 - arxiv.org
In this monograph, I introduce the basic concepts of Online Learning through a modern view
of Online Convex Optimization. Here, online learning refers to the framework of regret …
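The regret framework this monograph builds on is easiest to see through its simplest algorithm, online (projected) gradient descent over an interval; the scalar domain and the function name here are illustrative simplifications, not the monograph's notation.

```python
def online_gradient_descent(loss_grads, w0, lr, lo, hi):
    # At each round, play w_t, observe the gradient of that round's loss,
    # take a gradient step, and project back onto the domain [lo, hi].
    w = w0
    plays = []
    for g in loss_grads:
        plays.append(w)
        w = min(hi, max(lo, w - lr * g(w)))
    return plays
```

Regret here compares the cumulative loss of the plays against the best fixed point in hindsight; for convex losses this scheme achieves sublinear regret.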
Don't use large mini-batches, use local SGD
Mini-batch stochastic gradient methods (SGD) are state of the art for distributed training of
deep neural networks. Drastic increases in the mini-batch sizes have led to key efficiency …
Smart “predict, then optimize”
AN Elmachtoub, P Grigas - Management Science, 2022 - pubsonline.informs.org
Many real-world analytics problems involve two significant challenges: prediction and
optimization. Because of the typically complex nature of each challenge, the standard …
A finite time analysis of temporal difference learning with linear function approximation
Temporal difference learning (TD) is a simple iterative algorithm used to estimate the value
function corresponding to a given policy in a Markov decision process. Although TD is one of …
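The iterative algorithm this entry analyzes is, in its tabular form, a one-line update; the sketch below shows tabular TD(0), whereas the paper's analysis covers the linear function approximation case. Variable names and the episode encoding are illustrative assumptions.

```python
def td0(episodes, alpha, gamma, n_states):
    # Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)),
    # where the bootstrapped target uses the current estimate at s'.
    V = [0.0] * n_states
    for episode in episodes:
        for s, r, s_next in episode:    # s_next is None at episode end
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            V[s] += alpha * (target - V[s])
    return V
```

On a one-state chain with terminal reward 1, the estimate converges to the true value 1.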
Convex optimization: Algorithms and complexity
S Bubeck - Foundations and Trends® in Machine Learning, 2015 - nowpublishers.com
This monograph presents the main complexity theorems in convex optimization and their
corresponding algorithms. Starting from the fundamental theory of black-box optimization …
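A representative black-box method from the family this monograph covers is projected gradient descent for constrained convex minimization; the scalar interval domain below is an illustrative assumption chosen to keep the sketch self-contained.

```python
def projected_gd(grad, project, w0, lr, steps):
    # Projected gradient descent: w <- Pi_C(w - lr * grad(w)),
    # where Pi_C is the (here user-supplied) projection onto the feasible set C.
    w = w0
    for _ in range(steps):
        w = project(w - lr * grad(w))
    return w
```

For instance, minimizing (w - 3)^2/2 over [0, 1] converges to the boundary point 1, since the unconstrained minimizer lies outside the feasible set.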
FedSplit: An algorithmic framework for fast federated optimization
R Pathak, MJ Wainwright - Advances in neural information …, 2020 - proceedings.neurips.cc
Motivated by federated learning, we consider the hub-and-spoke model of distributed
optimization in which a central authority coordinates the computation of a solution among …