Tight analyses for non-smooth stochastic gradient descent
Conference on Learning Theory, 2019 • proceedings.mlr.press
Abstract
Consider the problem of minimizing functions that are Lipschitz and strongly convex, but not necessarily differentiable. We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ \emph{with high probability}. We also construct a function from this class for which the error of the final iterate of \emph{deterministic} gradient descent is $\Omega(\log(T)/T)$. This shows that the upper bound is tight and that, in this setting, the last iterate of stochastic gradient descent has the same general error rate (with high probability) as deterministic gradient descent. This resolves both open questions posed by Shamir (2012). An intermediate step of our analysis proves that the suffix averaging method achieves error $O(1/T)$ \emph{with high probability}, which is optimal (for any first-order optimization method). This improves results of Rakhlin et al. (2012) and Hazan and Kale (2014), both of which achieved error $O(1/T)$, but only in expectation, and achieved a high probability error bound of $O(\log\log(T)/T)$, which is suboptimal.
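As a rough illustration of the two estimators compared in the abstract, the following sketch runs stochastic subgradient descent on a toy non-smooth, strongly convex objective and reports both the final iterate and a suffix average. The objective $f(x) = |x| + x^2/2$, the noise model, and the step size $\eta_t = 1/(\lambda t)$ are illustrative assumptions, not taken from the paper; the step size is the standard choice for $\lambda$-strongly convex SGD.

```python
import random

def sgd(T, noise=0.1, seed=0):
    """Stochastic subgradient descent on f(x) = |x| + x^2/2 (minimized at x = 0).

    f is 1-strongly convex and Lipschitz near the optimum, but not
    differentiable at 0. Returns the full list of iterates x_1, ..., x_T.
    """
    rng = random.Random(seed)
    lam = 1.0  # strong convexity parameter of f
    x = 1.0    # arbitrary starting point
    iterates = []
    for t in range(1, T + 1):
        # Subgradient of |x| + x^2/2, plus Gaussian noise as the stochastic oracle.
        g = (1.0 if x > 0 else -1.0 if x < 0 else 0.0) + x
        g += rng.gauss(0.0, noise)
        x -= g / (lam * t)  # step size eta_t = 1 / (lam * t)
        iterates.append(x)
    return iterates

T = 10_000
iters = sgd(T)
final_error = abs(iters[-1])                              # final-iterate error
suffix_avg = sum(iters[T // 2:]) / (T - T // 2)           # average of the last half
print(final_error, abs(suffix_avg))
```

Under the paper's results, the final iterate carries an extra $\log(T)$ factor in its high-probability error bound, while the suffix average (here, averaging the last $T/2$ iterates) attains the optimal $O(1/T)$ rate; on this toy instance both errors shrink toward zero as $T$ grows.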