Online nonconvex optimization with limited instantaneous oracle feedback
We investigate online nonconvex optimization from a local regret minimization perspective.
Previous studies along this line implicitly required access to sufficient gradient oracles at …
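For context, the local regret studied in this line of work (in the sense of Hazan et al., 2017) replaces the usual comparator-based regret with the cumulative squared gradient norm of a sliding-window average of the losses. A standard formulation, with notation assumed here ($f_t$ the round-$t$ loss, $w$ the window size), is

$$\mathfrak{R}_w(T) = \sum_{t=1}^{T} \bigl\| \nabla F_{t,w}(x_t) \bigr\|^2, \qquad F_{t,w}(x) = \frac{1}{w} \sum_{i=0}^{w-1} f_{t-i}(x),$$

with the convention $f_{t-i} \equiv 0$ for $t-i \le 0$. An algorithm with sublinear local regret thus produces iterates that are, on average, approximate stationary points of the windowed losses.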
Regret minimization in stochastic non-convex learning via a proximal-gradient approach
N Hallak, P Mertikopoulos… - … Conference on Machine …, 2021 - proceedings.mlr.press
This paper develops a methodology for regret minimization with stochastic first-order oracle
feedback in online, constrained, non-smooth, non-convex problems. In this setting, the …
Online bilevel optimization: Regret analysis of online alternating gradient methods
This paper introduces online bilevel optimization, in which a sequence of time-varying
bilevel problems is revealed one after the other. We extend the known regret bounds for …
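As a hedged sketch (not the paper's exact scheme): with upper-level loss $f_t(x, y)$ and lower-level loss $g_t(x, y)$ revealed at round $t$, an online alternating gradient step typically takes the form

$$y_{t+1} = y_t - \beta \, \nabla_y g_t(x_t, y_t), \qquad x_{t+1} = x_t - \alpha \, \nabla_x f_t(x_t, y_{t+1}),$$

where $\alpha, \beta > 0$ are step sizes: the lower-level variable is updated first, and the upper-level update then uses the refreshed inner iterate, often with an additional implicit-gradient correction accounting for the dependence of the inner solution on $x$.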
Adaptive first-order methods revisited: Convex minimization without Lipschitz requirements
K Antonakopoulos… - Advances in Neural …, 2021 - proceedings.neurips.cc
We propose a new family of adaptive first-order methods for a class of convex minimization
problems that may fail to be Lipschitz continuous or smooth in the standard sense …
Non-convex bilevel optimization with time-varying objective functions
Bilevel optimization has become a powerful tool in a wide variety of machine learning
problems. However, the current nonconvex bilevel optimization considers an offline dataset …
On the Hardness of Online Nonconvex Optimization with Single Oracle Feedback
Online nonconvex optimization has been an active area of research recently. Previous
studies either considered the global regret with full information about the objective functions …
Nested bandits
In many online decision processes, the optimizing agent is called to choose between large
numbers of alternatives with many inherent similarities; in turn, these similarities imply …
Distributed stochastic Nash equilibrium learning in locally coupled network games with unknown parameters
In stochastic Nash equilibrium problems (SNEPs), it is natural for players to be uncertain
about their complex environments and have multi-dimensional unknown parameters in their …
Gradient and projection free distributed online min-max resource optimization
We consider distributed online min-max resource allocation with a set of parallel agents and
a parameter server. Our goal is to minimize the pointwise maximum over a set of time …
Unlocking TriLevel Learning with Level-Wise Zeroth Order Constraints: Distributed Algorithms and Provable Non-Asymptotic Convergence
Trilevel learning (TLL) has found diverse applications across machine learning, ranging from robust hyperparameter optimization to domain adaptation …