DC programming and DCA: thirty years of developments
HA Le Thi, T Pham Dinh - Mathematical Programming, 2018 - Springer
The year 2015 marks the 30th birthday of DC (Difference of Convex functions) programming
and DCA (DC Algorithms) which constitute the backbone of nonconvex programming and …
Minimization of ℓ1−2 for Compressed Sensing
We study minimization of the difference of ℓ1 and ℓ2 norms as a nonconvex and
Lipschitz continuous metric for solving constrained and unconstrained compressed sensing …
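For reference, the two problem forms studied in this line of work are usually written as follows (standard formulations with a sensing matrix A, measurements b, and penalty weight λ; the notation is generic, not the paper's):

    \min_x \|x\|_1 - \|x\|_2 \quad \text{s.t. } Ax = b   (constrained)

    \min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\,(\|x\|_1 - \|x\|_2)   (unconstrained)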
Open issues and recent advances in DC programming and DCA
HA Le Thi, T Pham Dinh - Journal of Global Optimization, 2024 - Springer
DC (difference of convex functions) programming and DC algorithm (DCA) are powerful
tools for nonsmooth nonconvex optimization. This field was created in 1985 by Pham Dinh …
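To make the DCA iteration concrete, here is a minimal NumPy sketch (my own illustration, not code from either survey) applied to the ℓ1−ℓ2 regularized least-squares problem that recurs in the entries below. The DC split g(x) = ½‖Ax−b‖² + λ‖x‖₁, h(x) = λ‖x‖₂ and the ISTA inner solver are standard choices; all names and parameters are assumptions.

    import numpy as np

    def soft(v, t):
        # Soft-thresholding: the proximal map of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def dca_l1_minus_l2(A, b, lam, outer=30, inner=200):
        # DCA for min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2),
        # with f = g - h, g(x) = 0.5*||Ax - b||^2 + lam*||x||_1, h(x) = lam*||x||_2.
        # Each outer step picks y in the subdifferential of h at the current x,
        # then solves the convex subproblem min_x g(x) - <y, x> by ISTA.
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of x -> A^T(Ax - b)
        for _ in range(outer):
            nx = np.linalg.norm(x)
            y = lam * x / nx if nx > 0 else np.zeros_like(x)
            for _ in range(inner):
                grad = A.T @ (A @ x - b) - y
                x = soft(x - grad / L, lam / L)
        return x

Each convex subproblem is an ordinary lasso, so any lasso solver could replace the inner ISTA loop.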
Fast L1–L2 minimization via a proximal operator
This paper aims to develop new and fast algorithms for recovering a sparse vector from a
small number of measurements, which is a fundamental problem in the field of compressive …
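The closed-form proximal step behind such fast methods can be reconstructed as a two-case formula (a sketch from the standard result for the unweighted ℓ1−ℓ2 penalty; function and variable names are mine, not the paper's code):

    import numpy as np

    def prox_l1_minus_l2(y, lam):
        # Sketch of argmin_x lam*(||x||_1 - ||x||_2) + 0.5*||x - y||^2, lam > 0.
        y = np.asarray(y, dtype=float)
        if np.max(np.abs(y)) > lam:
            # Soft-threshold, then rescale by (||z|| + lam) / ||z||.
            z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
            return z * (np.linalg.norm(z) + lam) / np.linalg.norm(z)
        # Otherwise the minimizer keeps a single largest-magnitude entry
        # (the penalty vanishes on 1-sparse vectors).
        x = np.zeros_like(y)
        if np.max(np.abs(y)) > 0:
            i = int(np.argmax(np.abs(y)))
            x[i] = y[i]
        return x

Having the prox in closed form is what lets the penalty plug into standard proximal splitting methods instead of a full DC decomposition.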
A weighted difference of anisotropic and isotropic total variation model for image processing
We propose a weighted difference of anisotropic and isotropic total variation (TV) as a
regularization for image processing tasks, based on the well-known TV model and natural …
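Concretely, with D_h u and D_v u denoting discrete horizontal and vertical differences, a weighted anisotropic-isotropic TV difference takes the form (a generic rendering with weight α ∈ [0, 1]; the symbols are mine):

    \sum_{i,j} \big( |(D_h u)_{i,j}| + |(D_v u)_{i,j}| \big) \;-\; \alpha \sum_{i,j} \sqrt{(D_h u)_{i,j}^2 + (D_v u)_{i,j}^2}

The first sum is the anisotropic TV and the second the isotropic TV, so α = 1 recovers the plain difference of the two.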
A survey for sparse regularization based compression methods
In recent years, deep neural networks (DNNs) have attracted extensive attention due to their
excellent performance in many fields of vision and speech recognition. With the increasing …
Transformed ℓ1 regularization for learning sparse deep neural networks
R Ma, J Miao, L Niu, P Zhang - Neural Networks, 2019 - Elsevier
Abstract Deep Neural Networks (DNNs) have achieved extraordinary success in numerous
areas. However, DNNs often carry a large number of weight parameters, leading to the …
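The transformed ℓ1 penalty in question has the standard form T_a(w) = (a+1)|w| / (a+|w|), which interpolates between an ℓ0-like count (a → 0) and ℓ1 (a → ∞). A minimal NumPy sketch of attaching it to a training objective (illustrative only; λ, a, and the helper names are assumptions, not the paper's setup):

    import numpy as np

    def tl1(w, a=1.0):
        # Transformed l1: sum of (a+1)|w| / (a + |w|) over the entries of w.
        w = np.abs(np.asarray(w, dtype=float))
        return np.sum((a + 1.0) * w / (a + w))

    def regularized_loss(data_loss, weight_arrays, lam=1e-4, a=1.0):
        # Task loss plus a TL1 sparsity term over all weight arrays.
        return data_loss + lam * sum(tl1(w, a) for w in weight_arrays)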
A scale-invariant approach for sparse signal recovery
In this paper, we study the ratio of the L_1 and L_2 norms, denoted as L_1/L_2, to promote
sparsity. Due to the nonconvexity and nonlinearity, there has been little attention to this scale …
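Written out, the scale-invariant model is (generic notation, mine):

    \min_{x \neq 0} \frac{\|x\|_1}{\|x\|_2} \quad \text{s.t. } Ax = b,

and the objective is invariant to rescaling since \|cx\|_1 / \|cx\|_2 = \|x\|_1 / \|x\|_2 for any c ≠ 0, which is what distinguishes it from the ℓ1 − ℓ2 difference.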
Minimization of transformed ℓ1 penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
We study the minimization problem of a non-convex sparsity promoting penalty function, the
transformed ℓ1 (TL1), and its application in compressed sensing (CS). The TL1 penalty …
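One natural DC splitting of TL1, consistent with the difference-of-convex-function algorithm named in the title (my rendering; not necessarily the paper's exact decomposition), is

    T_a(t) = \frac{a+1}{a}|t| \;-\; \left( \frac{a+1}{a}|t| - \frac{(a+1)|t|}{a + |t|} \right),

where the bracketed term, viewed as a function of s = |t| ≥ 0, has second derivative 2a(a+1)/(a+s)^3 > 0 and vanishing first derivative at s = 0, so it is convex and nondecreasing, and its composition with |t| is convex.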
Hyperspectral Image Restoration via Global L1-2 Spatial–Spectral Total Variation Regularized Local Low-Rank Tensor Recovery
Hyperspectral images (HSIs) are usually corrupted by various noises, e.g., Gaussian noise,
impulse noise, stripes, dead lines, and many others. In this article, motivated by the good …