Global optimality in bivariate gradient-based DAG learning

C Deng, K Bello, P Ravikumar… - Advances in Neural …, 2023 - proceedings.neurips.cc
Recently, a new class of non-convex optimization problems motivated by the statistical
problem of learning an acyclic directed graphical model from data has attracted significant …

One-line-of-code data mollification improves optimization of likelihood-based generative models

BH Tran, G Franzese, P Michiardi… - Advances in Neural …, 2023 - proceedings.neurips.cc
Generative Models (GMs) have attracted considerable attention due to their
tremendous success in various domains, such as computer vision, where they are capable of …

Continuation path learning for homotopy optimization

X Lin, Z Yang, X Zhang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Homotopy optimization is a traditional method to deal with a complicated optimization
problem by solving a sequence of easy-to-hard surrogate subproblems. However, this …
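The snippet describes the general homotopy (continuation) scheme that this paper builds on. A minimal sketch of that baseline scheme, not the paper's path-learning method: blend an easy convex surrogate with the hard objective and track the minimizer as the blend weight moves from easy to hard. The function names and the toy objective are illustrative assumptions.

```python
import numpy as np

def homotopy_minimize(grad_easy, grad_hard, x0,
                      ts=np.linspace(0.0, 1.0, 11), steps=200, lr=0.1):
    """Classic homotopy optimization (sketch, not the paper's method).

    At homotopy level t, gradient descent is run on the surrogate
    g_t(x) = (1 - t) * easy(x) + t * hard(x), warm-starting each level
    from the previous level's solution.
    """
    x = np.asarray(x0, dtype=float)
    for t in ts:
        for _ in range(steps):
            x = x - lr * ((1.0 - t) * grad_easy(x) + t * grad_hard(x))
    return x

# Toy problem: easy = convex bowl x^2; hard = x^2 - cos(4x), which has
# spurious local minima. The convex surrogate guides the iterate to the
# global basin before the ripples are switched on.
grad_easy = lambda x: 2.0 * x                          # d/dx of x^2
grad_hard = lambda x: 2.0 * x + 4.0 * np.sin(4.0 * x)  # d/dx of x^2 - cos(4x)
x_star = homotopy_minimize(grad_easy, grad_hard, x0=np.array([2.5]))
```

The paper's contribution, per the title, is to learn the continuation path rather than fix the schedule `ts` by hand as above.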

Homotopy-based training of NeuralODEs for accurate dynamics discovery

JH Ko, H Koh, N Park, W Jhe - Advances in Neural …, 2023 - proceedings.neurips.cc
Neural Ordinary Differential Equations (NeuralODEs) present an attractive way to
extract dynamical laws from time series data, as they bridge neural networks with the …

Using Stochastic Gradient Descent to Smooth Nonconvex Functions: Analysis of Implicit Graduated Optimization with Optimal Noise Scheduling

N Sato, H Iiduka - arXiv preprint arXiv:2311.08745, 2023 - arxiv.org
The graduated optimization approach is a heuristic method for finding the global optima
of nonconvex functions and has been theoretically analyzed in several studies …
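For concreteness, the explicit graduated-optimization scheme that the paper analyzes an implicit (SGD-noise) analogue of can be sketched as follows: descend on Gaussian-smoothed surrogates with a decreasing smoothing level, warm-starting each stage. This is a generic sketch with illustrative names and schedule, not the paper's algorithm.

```python
import numpy as np

def graduated_minimize(f, x0, sigmas=(2.0, 1.0, 0.5, 0.25, 0.1),
                       steps=100, lr=0.05, n_samples=200, seed=0):
    """Graduated optimization (sketch): gradient descent on a sequence of
    Gaussian-smoothed surrogates f_sigma(x) = E[f(x + sigma*u)], u ~ N(0,1),
    with decreasing sigma. Gradients of the surrogate are estimated from
    function values only, via the antithetic two-point estimator."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for sigma in sigmas:
        for _ in range(steps):
            u = rng.standard_normal(n_samples)
            g = ((f(x + sigma * u) - f(x - sigma * u)) * u).mean() / (2.0 * sigma)
            x -= lr * g
    return x

# Rippled 1-D objective: global minimum at 0, local minima elsewhere.
# Heavy smoothing first erases the ripples; later stages refine.
f = lambda z: z ** 2 - 2.0 * np.cos(3.0 * z)
x_star = graduated_minimize(f, x0=2.5)
```

The decreasing `sigmas` schedule plays the role that the paper attributes to a noise schedule inside SGD itself.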

Gaussian smoothing gradient descent for minimizing high-dimensional non-convex functions

A Starnes, A Dereventsov, C Webster - arXiv preprint arXiv:2311.00521, 2023 - arxiv.org
This work analyzes the convergence of a class of smoothing-based gradient descent
methods when applied to high-dimensional non-convex optimization problems. In particular …
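The class of methods the snippet refers to replaces the gradient of f with the gradient of its Gaussian smoothing, which can be estimated from function values alone. A hedged sketch of the standard zeroth-order estimator (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def smoothed_grad(f, x, sigma, n_samples=4096, rng=None):
    """Monte Carlo estimate of grad f_sigma(x), where
    f_sigma(x) = E_u[f(x + sigma*u)], u ~ N(0, I), using the antithetic
    two-point estimator (f(x + sigma*u) - f(x - sigma*u)) * u / (2*sigma).
    Only evaluations of f are required, no gradients."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    u = rng.standard_normal((n_samples, x.size))
    fp = np.array([f(x + sigma * ui) for ui in u])
    fm = np.array([f(x - sigma * ui) for ui in u])
    return ((fp - fm)[:, None] * u).mean(axis=0) / (2.0 * sigma)

# Sanity check: for f(x) = ||x||^2 the smoothed gradient is exactly 2x,
# so at x = (1, -2) the estimate should be close to (2, -4).
g = smoothed_grad(lambda z: z @ z, np.array([1.0, -2.0]), sigma=0.5, rng=0)
```

Plugging such an estimate into a plain descent loop gives the smoothing-based gradient descent family the abstract analyzes.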

Prediction-Correction Algorithm for Time-Varying Smooth Non-Convex Optimization

H Iwakiri, T Kamijima, S Ito, A Takeda - arXiv preprint arXiv:2402.06181, 2024 - arxiv.org
Time-varying optimization problems are prevalent in various engineering fields, and the
ability to solve them accurately in real-time is becoming increasingly important. The …

Deep learning with Gaussian continuation

AF Ilersich, PB Nair - Foundations of Data Science, 2024 - aimsciences.org
In this paper, we develop a Gaussian continuation framework for deep learning, which is an
optimization strategy that involves smoothing the loss function by convolving it with a …

Anisotropic Gaussian Smoothing for Gradient-based Optimization

A Starnes, G Zhang, V Reshniak, C Webster - arXiv preprint arXiv …, 2024 - arxiv.org
This article introduces a novel family of optimization algorithms: Anisotropic Gaussian
Smoothing Gradient Descent (AGS-GD), AGS-Stochastic Gradient Descent (AGS-SGD), and …

Global Optimization with A Power-Transformed Objective and Gaussian Smoothing

C Xu - arXiv preprint arXiv:2412.05204, 2024 - arxiv.org
We propose a novel method that solves global optimization problems in two steps: (1)
apply an (exponential) power-$N$ transformation to the not-necessarily differentiable …
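One way to read the combination the abstract describes is that exponentiating $N f$ sharpens the objective so its mass concentrates at the global optimum, and Gaussian smoothing of the transformed objective can then be explored by sampling. A hypothetical sketch along those lines (an importance-weighted update related to the cross-entropy method, not the paper's algorithm; all names and constants are assumptions):

```python
import numpy as np

def power_smoothed_step(f, x, N=50.0, sigma=1.5, n_samples=1024, rng=None):
    """One maximization update combining a power transform F = exp(N*f)
    with Gaussian smoothing: sample x + sigma*u, u ~ N(0, 1), and move to
    the softmax-weighted mean of the samples, which favors the highest
    values of f seen in the smoothing neighborhood."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(n_samples)
    fx = f(x + sigma * u)
    w = np.exp(N * (fx - fx.max()))   # stabilized softmax weights
    w /= w.sum()
    return x + sigma * (w * u).sum()

# Two-bump objective: global maximum near x = 2, local maximum near x = -2.
f = lambda z: np.exp(-(z - 2.0) ** 2) + 0.7 * np.exp(-(z + 2.0) ** 2)
rng = np.random.default_rng(0)
x = -2.0                              # start in the wrong basin
for _ in range(60):
    x = power_smoothed_step(f, x, rng=rng)
```

With a large power $N$, even a few samples that land near the global bump dominate the weights, pulling the iterate out of the inferior basin; the transform requires no differentiability of f.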