Global optimality in bivariate gradient-based DAG learning
Recently, a new class of non-convex optimization problems motivated by the statistical
problem of learning an acyclic directed graphical model from data has attracted significant …
One-line-of-code data mollification improves optimization of likelihood-based generative models
Generative Models (GMs) have attracted considerable attention due to their
tremendous success in various domains, such as computer vision, where they are capable of …
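A minimal sketch of what such a data-mollification step could look like, assuming it amounts to training on batches perturbed by annealed Gaussian noise (the function name, schedule, and arguments below are illustrative, not the paper's code):

    import torch

    def mollify(batch, step, total_steps, sigma_max=1.0):
        # Anneal the noise level linearly from sigma_max down to zero.
        sigma = sigma_max * (1.0 - step / total_steps)
        # The "one line of code": train on a noisy copy of each batch.
        return batch + sigma * torch.randn_like(batch)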
Continuation path learning for homotopy optimization
Homotopy optimization is a traditional method to deal with a complicated optimization
problem by solving a sequence of easy-to-hard surrogate subproblems. However, this …
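As a generic illustration of the easy-to-hard scheme (not the paper's learned continuation path), a homotopy loop minimizes a blend that interpolates from an easy surrogate at t = 0 to the target objective at t = 1, warm-starting each subproblem at the previous solution:

    import numpy as np
    from scipy.optimize import minimize

    def homotopy_minimize(f_hard, f_easy, x0, levels=10):
        # Solve a sequence of easy-to-hard surrogate subproblems.
        x = np.asarray(x0, dtype=float)
        for t in np.linspace(0.0, 1.0, levels):
            # Warm start: each subproblem begins at the previous solution.
            x = minimize(lambda z, t=t: (1 - t) * f_easy(z) + t * f_hard(z), x).x
        return x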
Homotopy-based training of NeuralODEs for accurate dynamics discovery
Neural Ordinary Differential Equations (NeuralODEs) present an attractive way to
extract dynamical laws from time series data, as they bridge neural networks with the …
Using Stochastic Gradient Descent to Smooth Nonconvex Functions: Analysis of Implicit Graduated Optimization with Optimal Noise Scheduling
N Sato, H Iiduka - arXiv preprint arXiv:2311.08745, 2023 - arxiv.org
The graduated optimization approach is a heuristic method for finding globally optimal
solutions for nonconvex functions and has been theoretically analyzed in several studies …
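A hedged sketch of the graduated-optimization template that the paper connects to SGD's implicit noise (the explicit schedule below is illustrative, not the paper's analysis): take gradient steps at Gaussian-perturbed points, which in expectation descends a smoothed version of the objective, and shrink the noise scale so the iterates transition from a heavily smoothed function to the original one:

    import numpy as np

    def graduated_sgd(grad_f, x0, deltas=(1.0, 0.5, 0.25, 0.1, 0.0),
                      lr=0.05, steps_per_level=200,
                      rng=np.random.default_rng(0)):
        x = np.asarray(x0, dtype=float)
        for delta in deltas:  # decreasing smoothing/noise schedule
            for _ in range(steps_per_level):
                # The gradient at a perturbed point is an unbiased estimate
                # of the gradient of the delta-smoothed objective.
                x = x - lr * grad_f(x + delta * rng.standard_normal(x.shape))
        return x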
Gaussian smoothing gradient descent for minimizing high-dimensional non-convex functions
A Starnes, A Dereventsov, C Webster - arXiv preprint arXiv:2311.00521, 2023 - arxiv.org
This work analyzes the convergence of a class of smoothing-based gradient descent
methods when applied to high-dimensional non-convex optimization problems. In particular …
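The object such methods descend is the Gaussian smoothing $f_\sigma(x) = \mathbb{E}_{u \sim \mathcal{N}(0, I)}[f(x + \sigma u)]$, whose gradient admits a standard derivative-free Monte Carlo estimator (the paper's exact estimator and step rules may differ):

    import numpy as np

    def smoothed_grad(f, x, sigma=0.5, m=64, rng=np.random.default_rng(0)):
        # Two-sided (antithetic) zeroth-order estimate of grad f_sigma(x).
        u = rng.standard_normal((m, x.size))
        fp = np.apply_along_axis(f, 1, x + sigma * u)
        fm = np.apply_along_axis(f, 1, x - sigma * u)
        return ((fp - fm)[:, None] * u).sum(axis=0) / (2 * m * sigma)

Plain descent then iterates x = x - lr * smoothed_grad(f, x).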
Prediction-Correction Algorithm for Time-Varying Smooth Non-Convex Optimization
Time-varying optimization problems are prevalent in various engineering fields, and the
ability to solve them accurately in real-time is becoming increasingly important. The …
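Schematically, the generic prediction-correction template for tracking $x^*(t) = \arg\min_x f(x, t)$ alternates a predictor step along the optimal trajectory with corrector steps at the new sampling time; the concrete updates in the paper may differ from this common discrete-time variant:

    import numpy as np

    def track_minimizer(grad, hess, grad_t, x0, t_grid, corr_steps=3, lr=0.1):
        # grad, hess: x-gradient and x-Hessian of f; grad_t: d/dt of grad.
        x = np.asarray(x0, dtype=float)
        for t_old, t_new in zip(t_grid[:-1], t_grid[1:]):
            h = t_new - t_old
            # Predict: grad(x*(t), t) = 0 gives dx*/dt = -hess^{-1} grad_t
            # by the implicit function theorem.
            x = x - h * np.linalg.solve(hess(x, t_old), grad_t(x, t_old))
            # Correct: a few gradient steps on the new objective f(., t_new).
            for _ in range(corr_steps):
                x = x - lr * grad(x, t_new)
        return x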
Deep learning with Gaussian continuation
AF Ilersich, PB Nair - Foundations of Data Science, 2024 - aimsciences.org
In this paper, we develop a Gaussian continuation framework for deep learning, which is an
optimization strategy that involves smoothing the loss function by convolving it with a …
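In symbols, with $\phi_\sigma$ the density of $\mathcal{N}(0, \sigma^2 I)$, the continuation family is presumably the standard one,
$$F_\sigma(\theta) = (f * \phi_\sigma)(\theta) = \mathbb{E}_{u \sim \mathcal{N}(0, I)}\big[f(\theta + \sigma u)\big], \qquad \sigma_0 > \sigma_1 > \cdots > \sigma_K = 0,$$
with the minimizer at level $\sigma_{k+1}$ warm-started from the level-$\sigma_k$ solution, so that training ends on the unsmoothed loss $F_0 = f$.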
Anisotropic Gaussian Smoothing for Gradient-based Optimization
This article introduces a novel family of optimization algorithms: Anisotropic Gaussian
Smoothing Gradient Descent (AGS-GD), AGS-Stochastic Gradient Descent (AGS-SGD), and …
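"Anisotropic" here presumably replaces the single scalar smoothing scale with a per-coordinate (or full-covariance) one; a diagonal-covariance sketch of the resulting gradient estimator (illustrative, not the article's exact update rules):

    import numpy as np

    def anisotropic_smoothed_grad(f, x, sigmas, m=64,
                                  rng=np.random.default_rng(0)):
        # Zeroth-order gradient of f smoothed by N(0, diag(sigmas**2)).
        u = rng.standard_normal((m, x.size))
        z = u * sigmas  # per-coordinate noise scales replace one sigma
        fp = np.apply_along_axis(f, 1, x + z)
        fm = np.apply_along_axis(f, 1, x - z)
        return ((fp - fm)[:, None] * u).sum(axis=0) / (2 * m * sigmas)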
Global Optimization with A Power-Transformed Objective and Gaussian Smoothing
C Xu - arXiv preprint arXiv:2412.05204, 2024 - arxiv.org
We propose a novel method that solves global optimization problems in two steps: (1)
perform an (exponential) power-$N$ transformation to the not-necessarily differentiable …
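On one reading of the abstract (a maximization framing), step (2) amounts to ascent on the Gaussian smoothing of $e^{N f}$, which for large $N$ concentrates the smoothed landscape around the global maximizer; a Monte Carlo sketch under that reading (the paper's actual estimator and step sizes may differ):

    import numpy as np

    def power_smoothed_ascent(f, x0, N=20.0, sigma=1.0, m=256, lr=0.5,
                              steps=100, rng=np.random.default_rng(0)):
        # Ascent on the smoothing of exp(N * f); f need not be differentiable.
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            u = rng.standard_normal((m, x.size))
            vals = np.apply_along_axis(f, 1, x + sigma * u)
            w = np.exp(N * (vals - vals.max()))  # stable power-N weights
            # Softmax-weighted direction: the normalized zeroth-order gradient
            # of the smoothed, power-transformed objective.
            x = x + lr * (w[:, None] * u).sum(axis=0) / (sigma * w.sum())
        return x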