Deep image demosaicking using a cascade of convolutional residual denoising networks
F Kokkinos, S Lefkimmiatis - Proceedings of the European …, 2018 - openaccess.thecvf.com
Demosaicking and denoising are among the most crucial steps of modern digital camera
pipelines and their joint treatment is a highly ill-posed inverse problem where at least two …
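The snippet only hints at the architecture, so here is a minimal NumPy sketch of the core idea as I read it: each stage predicts the noise residual and subtracts it through a skip connection, and several stages are chained into a cascade. All kernels and names (`conv2d`, `residual_denoise`, `cascade`) are illustrative stand-ins for the trained network, not the authors' code.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' 2D convolution (single channel); for illustration only."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def residual_denoise(x, weights):
    """One residual stage: predict the noise and subtract it.
    `weights` is a list of small conv kernels standing in for a trained CNN."""
    r = x
    for w in weights[:-1]:
        r = np.maximum(conv2d(r, w), 0.0)  # conv + ReLU
    r = conv2d(r, weights[-1])             # final conv predicts the residual (noise)
    return x - r                           # skip connection: clean = noisy - noise

def cascade(x, stages):
    """Chain several residual stages, each refining the previous estimate."""
    for weights in stages:
        x = residual_denoise(x, weights)
    return x

# Toy usage with random (untrained) kernels, just to show the shapes flow.
rng = np.random.default_rng(0)
stages = [[rng.normal(scale=0.1, size=(3, 3)) for _ in range(3)] for _ in range(2)]
out = cascade(rng.random((16, 16)), stages)
```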
Sharpness, restart and acceleration
V Roulet, A d'Aspremont - Advances in Neural Information …, 2017 - proceedings.neurips.cc
The Łojasiewicz inequality shows that Hölderian error bounds on the minimum of
convex optimization problems hold almost generically. Here, we clarify results of …
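As a hedged illustration of the restart idea (not the paper's exact scheme), the sketch below wraps plain Nesterov acceleration in periodic momentum restarts; under a sharpness/Hölderian error bound, such restarts are what recover faster rates. The fixed schedule and all names are assumptions.

```python
import numpy as np

def accelerated_gd(grad, x0, L, n_iter):
    """Plain Nesterov acceleration with step 1/L."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - grad(y) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

def restarted_agd(grad, x0, L, schedule):
    """Restart the momentum after each block of iterations; under sharpness,
    restarting is what turns the sublinear accelerated rate into a fast one."""
    x = x0
    for k in schedule:                      # e.g. [50, 50, ...] or geometric
        x = accelerated_gd(grad, x, L, k)   # momentum is reset at each restart
    return x

# Toy usage on an ill-conditioned quadratic.
Q = np.diag([1.0, 100.0])
x_hat = restarted_agd(lambda x: Q @ x, np.ones(2), L=100.0, schedule=[50] * 5)
```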
When is a convolutional filter easy to learn?
We analyze the convergence of the (stochastic) gradient descent algorithm for learning a
convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does …
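A minimal sketch of the analyzed setting, simplified here to a single ReLU unit on Gaussian inputs (the paper treats a convolutional filter over patches): run SGD on the squared loss against a planted teacher filter. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)          # unknown "teacher" filter

def relu(z):
    return np.maximum(z, 0.0)

w = rng.normal(size=d) * 0.1         # student initialization
lr = 0.05
for step in range(2000):
    x = rng.normal(size=d)           # fresh Gaussian input (one patch)
    err = relu(w @ x) - relu(w_true @ x)
    # (sub)gradient of 0.5 * err^2 w.r.t. w; zero where the ReLU is inactive
    g = err * (w @ x > 0) * x
    w -= lr * g

print("distance to teacher:", np.linalg.norm(w - w_true))
```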
Iterative joint image demosaicking and denoising using a residual denoising network
F Kokkinos, S Lefkimmiatis - IEEE Transactions on Image …, 2019 - ieeexplore.ieee.org
Modern digital cameras rely on the sequential execution of separate image processing steps
to produce realistic images. The first two steps are usually related to denoising and …
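The iterative scheme can be sketched as a majorization-minimization loop alternating a data-consistency gradient step with a denoising step. The `toy_denoise` smoother below stands in for the residual denoising network, and the binary mask for Bayer sampling; both are assumptions, not the authors' implementation.

```python
import numpy as np

def mm_restore(y, mask, denoise, n_iter=50, step=1.0):
    """MM sketch for joint demosaicking/denoising: gradient step on the data
    term 0.5*||mask*x - y||^2, then a denoising step (the network's role)."""
    x = y.copy()                               # initialize from mosaicked input
    for _ in range(n_iter):
        x = x - step * mask * (mask * x - y)   # data-consistency gradient step
        x = denoise(x)                         # network replaced by a stand-in
    return x

def toy_denoise(x):
    """Stand-in denoiser: mild smoothing by neighbor averaging."""
    xp = np.pad(x, 1, mode="edge")
    return 0.5 * x + 0.125 * (xp[:-2, 1:-1] + xp[2:, 1:-1]
                              + xp[1:-1, :-2] + xp[1:-1, 2:])

rng = np.random.default_rng(1)
clean = rng.random((8, 8))
mask = (rng.random((8, 8)) > 0.5).astype(float)   # Bayer-like sampling mask
y = mask * (clean + 0.05 * rng.normal(size=clean.shape))
x_hat = mm_restore(y, mask, toy_denoise)
```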
Faster first-order primal-dual methods for linear programming using restarts and sharpness
First-order primal-dual methods are appealing for their low memory overhead, fast iterations,
and effective parallelization. However, they are often slow at finding high-accuracy solutions …
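A hedged sketch of the method family: PDHG on the LP saddle point min_x max_y c·x + y·(Ax − b) over x ≥ 0, restarted at the running average of the iterates. The step sizes and the fixed restart period are illustrative choices, not the paper's adaptive rule.

```python
import numpy as np

def restarted_pdhg(c, A, b, n_outer=20, n_inner=200):
    """PDHG for  min c@x  s.t.  A@x = b, x >= 0, restarted at the average
    iterate; under sharpness, restarting yields linear convergence."""
    m, n = A.shape
    eta = 0.9 / np.linalg.norm(A, 2)          # primal/dual step sizes
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_outer):
        x_sum, y_sum = np.zeros(n), np.zeros(m)
        for _ in range(n_inner):
            x_new = np.maximum(x - eta * (c + A.T @ y), 0.0)   # primal step
            y = y + eta * (A @ (2 * x_new - x) - b)            # dual step
            x = x_new
            x_sum += x
            y_sum += y
        x, y = x_sum / n_inner, y_sum / n_inner   # restart from the average
    return x, y
```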
Statistically preconditioned accelerated gradient method for distributed optimization
We consider the setting of distributed empirical risk minimization where multiple machines
compute the gradients in parallel and a centralized server updates the model parameters. In …
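A simplified, unaccelerated sketch of statistical preconditioning on a least-squares instance: the server preconditions each gradient step with the Hessian of one machine's local objective plus μI, which is close to the global Hessian when the machines see i.i.d. data. The acceleration layer of the actual method is omitted, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_machines, n_local, d = 10, 50, 5
As = [rng.normal(size=(n_local, d)) for _ in range(n_machines)]
bs = [A @ rng.normal(size=d) + 0.1 * rng.normal(size=n_local) for A in As]

def full_grad(x):
    """Gradient of the global least-squares objective (averaged over machines)."""
    return sum(A.T @ (A @ x - b) / n_local for A, b in zip(As, bs)) / n_machines

# Statistical preconditioner: Hessian of machine 0's local objective plus mu*I.
# With i.i.d. data it approximates the global Hessian, so each step behaves
# almost like a Newton step while touching only one machine's data.
mu = 0.1
H0 = As[0].T @ As[0] / n_local + mu * np.eye(d)

x = np.zeros(d)
for _ in range(30):
    x = x - np.linalg.solve(H0, full_grad(x))   # preconditioned gradient step
print("grad norm:", np.linalg.norm(full_grad(x)))
```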
Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
In this paper, an inexact proximal-point penalty method is studied for constrained
optimization problems, where the objective function is non-convex, and the constraint …
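A hedged sketch of the outer/inner structure on a toy equality-constrained problem: each outer round approximately minimizes the objective plus a quadratic penalty and a proximal term using a fixed budget of gradient steps (hence "inexact"), then tightens the penalty. Step sizes and schedules here are assumptions.

```python
import numpy as np

def ippp(f_grad, c, c_jac, x0, beta0=1.0, rho=1.0, n_outer=10, n_inner=300):
    """Inexact proximal-point penalty sketch for  min f(x)  s.t.  c(x) = 0.
    Each outer step approximately minimizes
        f(x) + beta_k * ||c(x)||^2 + (rho/2) * ||x - x_k||^2
    with a fixed budget of gradient steps; beta_k grows across rounds."""
    x, beta = x0.copy(), beta0
    for _ in range(n_outer):
        xk = x.copy()
        step = 0.4 / (1.0 + beta)     # shrink the step as the penalty stiffens
        for _ in range(n_inner):
            g = f_grad(x) + 2.0 * beta * c_jac(x).T @ c(x) + rho * (x - xk)
            x = x - step * g
        beta *= 2.0                   # tighten the penalty each outer round
    return x

# Toy instance: minimize ||x||^2 subject to x[0] + x[1] = 1.
f_grad = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac = lambda x: np.array([[1.0, 1.0]])
print(ippp(f_grad, c, c_jac, np.zeros(2)))   # approaches [0.5, 0.5]
```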
Parameter-free FISTA by adaptive restart and backtracking
We consider a combined restarting and adaptive backtracking strategy for the popular fast
iterative shrinkage-thresholding algorithm (FISTA) frequently employed for accelerating the …
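A sketch of the two ingredients named in the title, assuming a composite objective F = f + g with `prox_g` available: backtracking grows the local Lipschitz estimate L until the standard quadratic upper bound holds, and a function-value restart drops the momentum whenever the objective increases. This follows the common O'Donoghue-Candès restart heuristic rather than the paper's exact rule.

```python
import numpy as np

def fista(grad_f, f, prox_g, x0, L0=1.0, n_iter=500):
    """FISTA with backtracking on the smooth part f and a function-value
    restart: neither a Lipschitz constant nor a restart period is needed."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    f_prev = np.inf
    for _ in range(n_iter):
        g = grad_f(y)
        while True:  # backtracking: grow L until the quadratic bound holds
            x_new = prox_g(y - g / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        if f(x_new) > f_prev:   # adaptive restart: objective went up
            t = 1.0             # ... so drop the momentum
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t, f_prev = x_new, t_new, f(x_new)
    return x

# Usage on a small lasso problem: f(x) = 0.5||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(3)
A, b, lam = rng.normal(size=(30, 10)), rng.normal(size=30), 0.1
x_hat = fista(lambda x: A.T @ (A @ x - b),
              lambda x: 0.5 * np.sum((A @ x - b) ** 2),
              lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0),
              np.zeros(10))
```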
A generic online acceleration scheme for optimization algorithms via relaxation and inertia
F Iutzeler, JM Hendrickx - Optimization Methods and Software, 2019 - Taylor & Francis
We propose generic acceleration schemes for a wide class of optimization and iterative
methods based on relaxation and inertia. In particular, we introduce methods that …
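As a generic illustration (names and constants assumed), the wrapper below adds the two ingredients to any fixed-point operator T: an inertial extrapolation along the last displacement, then a relaxed combination of the input point with T's output.

```python
import numpy as np

def accelerate(T, x0, alpha=0.3, eta=1.0, n_iter=200):
    """Relaxation + inertia wrapper around a fixed-point operator T
    (e.g. a gradient or proximal step): extrapolate along the last move
    (inertia), apply T, then blend with the input point (relaxation)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        z = x + alpha * (x - x_prev)                 # inertia (momentum)
        x_prev, x = x, (1 - eta) * z + eta * T(z)    # relaxation
    return x

# Toy usage: T is a gradient-descent step on f(x) = 0.5 * x'Qx.
Q = np.diag([1.0, 10.0])
T = lambda x: x - 0.1 * (Q @ x)
print(accelerate(T, np.array([5.0, 5.0])))   # approaches the minimizer 0
```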
Potential function-based framework for minimizing gradients in convex and min-max optimization
J Diakonikolas, P Wang - SIAM Journal on Optimization, 2022 - SIAM
Making the gradients small is a fundamental optimization problem that has eluded unifying
and simple convergence arguments in first-order optimization, so far primarily reserved for …
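For concreteness, here is the textbook potential-function argument for making gradients small with plain gradient descent on an L-smooth f; it is only an instance of the kind of argument the paper unifies, not its actual potential.

```latex
\[
f(x_{k+1}) \le f(x_k) - \frac{1}{2L}\,\|\nabla f(x_k)\|^2
\quad\Longrightarrow\quad
\min_{0 \le k < K} \|\nabla f(x_k)\|^2
\le \frac{2L\,\bigl(f(x_0) - f^\star\bigr)}{K},
\]
where the potential $\phi_k = f(x_k) - f^\star$ decreases at every step by at
least $\|\nabla f(x_k)\|^2 / (2L)$, and the bound follows by telescoping.
```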