Deep image demosaicking using a cascade of convolutional residual denoising networks

F Kokkinos, S Lefkimmiatis - Proceedings of the European …, 2018 - openaccess.thecvf.com
Demosaicking and denoising are among the most crucial steps of modern digital camera
pipelines, and their joint treatment is a highly ill-posed inverse problem where at least two …
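The ill-posedness comes from the sensor itself: a Bayer mosaic records only one color sample per pixel, discarding two thirds of the information that demosaicking must recover. A minimal NumPy sketch of an RGGB forward operator (illustrative only, not the authors' code) makes the information loss concrete:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Apply an RGGB Bayer pattern: keep one color sample per pixel.

    Inverting this operator is ill-posed because two of the three
    channel values at every pixel are discarded.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```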

Sharpness, restart and acceleration

V Roulet, A d'Aspremont - Advances in Neural Information …, 2017 - proceedings.neurips.cc
The Łojasiewicz inequality shows that Hölderian error bounds on the minimum of
convex optimization problems hold almost generically. Here, we clarify results of …
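For reference, a Hölderian error bound (a Łojasiewicz-type inequality) on a convex problem with minimum value f* and solution set X* takes the standard form below; the constant and exponent are generic notation, not necessarily the paper's:

```latex
\mu \, d(x, X^\star)^r \;\le\; f(x) - f^\star
\qquad \text{for all } x \text{ with } f(x) \le f^\star + \varepsilon,
\quad \mu > 0, \; r \ge 1 .
```

Restart schemes exploit such bounds: the exponent r governs how often an accelerated method should be restarted to recover the fast rate.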

When is a convolutional filter easy to learn?

SS Du, JD Lee, Y Tian - arXiv preprint arXiv:1709.06129, 2017 - arxiv.org
We analyze the convergence of the (stochastic) gradient descent algorithm for learning a
convolutional filter with the Rectified Linear Unit (ReLU) activation function. Our analysis does …
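The setting can be reproduced in a few lines: a teacher filter generates labels through a ReLU, and a student filter is trained by stochastic gradient descent on the squared loss. This toy sketch assumes Gaussian inputs and a realizable teacher (assumptions made here for illustration, not a claim about the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                         # flattened patch / filter size
w_true = rng.normal(size=d)    # unknown teacher filter
w = 0.1 * rng.normal(size=d)   # student initialization

for step in range(5000):
    x = rng.normal(size=d)            # random input patch
    y = max(w_true @ x, 0.0)          # teacher label: ReLU(w_true . x)
    pred = w @ x
    # (sub)gradient of 0.5 * (ReLU(w . x) - y)^2 with respect to w
    grad = (max(pred, 0.0) - y) * float(pred > 0) * x
    w -= 0.05 * grad

print("distance to teacher:", np.linalg.norm(w - w_true))
```

Whether such runs reach the teacher depends on the initialization and input distribution, which is exactly the kind of question the paper's analysis addresses.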

Iterative joint image demosaicking and denoising using a residual denoising network

F Kokkinos, S Lefkimmiatis - IEEE Transactions on Image …, 2019 - ieeexplore.ieee.org
Modern digital cameras rely on the sequential execution of separate image processing steps
to produce realistic images. The first two steps are usually related to denoising and …
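The paper's pipeline interleaves a data-consistency step with a learned residual denoiser. A generic plug-and-play style sketch of such an iteration (a simplification, not the authors' exact update rule) looks like:

```python
import numpy as np

def iterative_restore(y, mask, denoiser, steps=20, gamma=1.0):
    """Alternate data consistency and residual denoising for y = mask * x + noise.

    `mask` is a binary array marking the observed Bayer samples, and
    `denoiser(x)` stands in for a trained network that predicts the
    noise/artifact residual, so x - denoiser(x) is the cleaned image.
    """
    x = y.copy()
    for _ in range(steps):
        x = x - gamma * mask * (mask * x - y)  # gradient step on ||mask*x - y||^2 / 2
        x = x - denoiser(x)                    # subtract the predicted residual
    return x
```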

Faster first-order primal-dual methods for linear programming using restarts and sharpness

D Applegate, O Hinder, H Lu, M Lubin - Mathematical Programming, 2023 - Springer
First-order primal-dual methods are appealing for their low memory overhead, fast iterations,
and effective parallelization. However, they are often slow at finding high accuracy solutions …
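A restarted primal-dual hybrid gradient (PDHG) loop for the LP min c'x subject to Ax = b, x >= 0 can be sketched as follows. Fixed-frequency restarts at the iterate average are a simplification made here; the paper's adaptive restarts are driven by sharpness-based progress measures:

```python
import numpy as np

def restarted_pdhg(c, A, b, tau, sigma, outer=10, inner=200):
    """PDHG for min c'x s.t. Ax = b, x >= 0, restarted at the average.

    Step sizes must satisfy tau * sigma * ||A||^2 <= 1 for stability.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for _ in range(inner):
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
            y = y + sigma * (b - A @ (2 * x_new - x))         # extrapolated dual step
            x = x_new
            x_avg += x / inner
            y_avg += y / inner
        x, y = x_avg, y_avg  # restart from the running average
    return x, y
```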

Statistically preconditioned accelerated gradient method for distributed optimization

H Hendrikx, L Xiao, S Bubeck, F Bach… - … on machine learning, 2020 - proceedings.mlr.press
We consider the setting of distributed empirical risk minimization where multiple machines
compute the gradients in parallel and a centralized server updates the model parameters. In …
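Statistical preconditioning replaces the Euclidean gradient step at the server with a Bregman step generated by a local surrogate phi (for example, the empirical loss on a server-held sample plus a regularizer). Schematically, in generic notation rather than the paper's:

```latex
x_{k+1} = \arg\min_{x} \Big\{ \langle \nabla F(x_k), x \rangle
          + \tfrac{1}{\eta} D_\phi(x, x_k) \Big\},
\qquad
D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle .
```

When phi is statistically close to the global objective F, the relative condition number of F with respect to phi is small, which is what yields the improved iteration complexity.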

Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization

Q Lin, R Ma, Y Xu - Computational optimization and applications, 2022 - Springer
In this paper, an inexact proximal-point penalty method is studied for constrained
optimization problems where the objective function is non-convex and the constraint …
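A schematic version of such a method solves, at each outer iteration, a penalized proximal subproblem to loose accuracy. The sketch below uses a fixed inner budget and a doubling penalty as stand-ins for the paper's accuracy conditions and penalty schedule:

```python
import numpy as np

def inexact_prox_penalty(grad_f, c, jac_c, x0, beta=10.0, rho=1.0,
                         outer=20, inner=100, lr=1e-3):
    """Inexact proximal-point penalty loop for min f(x) s.t. c(x) = 0.

    Outer step k approximately minimizes
        f(x) + beta * ||c(x)||^2 + rho * ||x - x_k||^2
    by a fixed number of gradient steps (the inexact inner solve).
    """
    x = x0.copy()
    for _ in range(outer):
        xk = x.copy()
        for _ in range(inner):
            g = (grad_f(x)
                 + 2.0 * beta * jac_c(x).T @ c(x)   # penalty term gradient
                 + 2.0 * rho * (x - xk))            # proximal term gradient
            x = x - lr * g
        beta *= 2.0  # tighten the constraint penalty between outer steps
    return x
```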

Parameter-free FISTA by adaptive restart and backtracking

JF Aujol, L Calatroni, C Dossal, H Labarrière… - SIAM Journal on …, 2024 - SIAM
We consider a combined restarting and adaptive backtracking strategy for the popular fast
iterative shrinkage-thresholding algorithm (FISTA), frequently employed for accelerating the …
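A compact sketch of FISTA with backtracking on the smoothness estimate and an adaptive restart is below. The restart test is the common gradient-based one (O'Donoghue and Candès style), used here as a stand-in; the paper analyzes its own restart and backtracking schedule. The `prox_g(v, step)` signature is an assumption of this sketch:

```python
import numpy as np

def fista(grad_f, f, prox_g, x0, L=1.0, eta=2.0, iters=500):
    """FISTA for min f(x) + g(x) with backtracking and adaptive restart."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = grad_f(y)
        while True:  # backtracking: grow L until the quadratic model upper-bounds f
            x_new = prox_g(y - g / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if g @ (x_new - x) > 0.0:     # restart when momentum opposes descent
            t_new, y = 1.0, x_new.copy()
        x, t = x_new, t_new
    return x
```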

A generic online acceleration scheme for optimization algorithms via relaxation and inertia

F Iutzeler, JM Hendrickx - Optimization Methods and Software, 2019 - Taylor & Francis
We propose generic acceleration schemes, based on relaxation and inertia, for a wide class of
optimization algorithms and iterative methods. In particular, we introduce methods that …
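The two building blocks compose as a wrapper around any fixed-point map T: an inertial extrapolation followed by a relaxed application of T. A minimal sketch with generic, fixed parameters (the paper's contribution is tuning them online):

```python
import numpy as np

def relaxed_inertial(T, x0, alpha=1.0, beta=0.3, iters=200):
    """Relaxation + inertia around a fixed-point iteration x <- T(x).

    u_k     = x_k + beta * (x_k - x_{k-1})        # inertia
    x_{k+1} = (1 - alpha) * u_k + alpha * T(u_k)  # relaxation

    alpha = 1 and beta = 0 recover the plain iteration.
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        u = x + beta * (x - x_prev)
        x_prev, x = x, (1.0 - alpha) * u + alpha * T(u)
    return x
```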

Potential function-based framework for minimizing gradients in convex and min-max optimization

J Diakonikolas, P Wang - SIAM Journal on Optimization, 2022 - SIAM
Making the gradients small is a fundamental optimization problem that has eluded unifying
and simple convergence arguments in first-order optimization, so far primarily reserved for …
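A potential-function argument of this kind exhibits a quantity that is non-increasing along the iterates and dominates the squared gradient norm. A generic template (illustrative notation, not the paper's) is:

```latex
\Phi_k = a_k \big( f(x_k) - f^\star \big) + b_k \,\|\nabla f(x_k)\|^2,
\qquad \Phi_{k+1} \le \Phi_k ,
```

so that ||grad f(x_K)||^2 <= Phi_0 / b_K, and the convergence rate follows from how fast the weights b_K can be made to grow.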