An overview of low-rank matrix recovery from incomplete observations
MA Davenport, J Romberg - IEEE Journal of Selected Topics in …, 2016 - ieeexplore.ieee.org
Low-rank matrices play a fundamental role in modeling and computational methods for
signal processing and machine learning. In many applications where low-rank matrices …
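The recovery problem surveyed above can be illustrated with a toy alternating-projection loop: observe a random subset of entries of a rank-r matrix, then alternate between a rank-r SVD projection and re-imposing the observed entries. This is a minimal sketch with illustrative sizes, not one of the survey's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 ground-truth matrix; observe ~60% of its entries.
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6

# Alternating projections: rank-r set <-> data-consistency set.
X = np.zeros_like(M)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0                      # project onto rank-r matrices
    X = (U * s) @ Vt
    X[mask] = M[mask]              # re-impose the observed entries

err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries relative to the rank, the iterates typically converge to the true matrix; the convolution of sampling density and rank governs when recovery is possible, which is exactly the regime the survey characterizes.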
A selective review of group selection in high-dimensional models
Grouping structures arise naturally in many statistical modeling problems. Several methods
have been proposed for variable selection that respect grouping structure in variables …
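A concrete building block behind many of the group-selection methods reviewed there is the group-lasso proximal operator, which shrinks or zeroes whole groups of coefficients at once. A minimal sketch with illustrative groups and penalty level, not any specific method from the review:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Block soft-thresholding: proximal operator of the group-lasso
    penalty lam * sum_g ||beta_g||_2. Groups whose l2 norm falls below
    lam are zeroed entirely; surviving groups are shrunk as a block."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * beta[g]
    return out

beta = np.array([3.0, 4.0, 0.1, 0.1])
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
# group [0, 1] has norm 5 -> scaled by (1 - 1/5) -> [2.4, 3.2]
# group [2, 3] has norm ~0.14 < 1 -> removed as a whole group
```

Selecting variables group-wise rather than coordinate-wise is precisely what distinguishes these methods from the ordinary lasso.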
Deja vu: Contextual sparsity for efficient LLMs at inference time
Large language models (LLMs) with hundreds of billions of parameters have sparked a new
wave of exciting AI applications. However, they are computationally expensive at inference …
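The idea of contextual sparsity, activating only an input-dependent subset of neurons per forward pass, can be sketched in a few lines. The top-k selection rule and function names below are illustrative, not the paper's learned predictor:

```python
import numpy as np

def sparse_mlp_forward(x, W1, W2, k):
    """Hypothetical contextual-sparsity sketch: keep only the k neurons
    with the largest-magnitude pre-activations for this input, and
    contract only the corresponding rows of the output projection."""
    pre = x @ W1                            # pre-activations, shape (hidden,)
    idx = np.argsort(np.abs(pre))[-k:]      # input-dependent active set
    h = np.maximum(pre[idx], 0.0)           # ReLU on active neurons only
    return h @ W2[idx]                      # skip the inactive rows

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 64))
W2 = rng.standard_normal((64, 8))

dense = np.maximum(x @ W1, 0.0) @ W2
sparse = sparse_mlp_forward(x, W1, W2, k=64)   # k = all neurons -> exact
```

With k equal to the full width the sparse path reproduces the dense output exactly; the efficiency argument is that a small k changes the output little while skipping most of the computation.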
Improving diffusion models for inverse problems using manifold constraints
Recently, diffusion models have been used to solve various inverse problems in an
unsupervised manner with appropriate modifications to the sampling process. However, the …
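The "appropriate modifications to the sampling process" in such solvers typically include a measurement-consistency correction interleaved with denoising. The toy loop below shows only that correction, a gradient step on ||Ax - y||^2 with the denoising step elided; all sizes and step sizes are illustrative, not the paper's manifold-constrained scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear inverse problem y = A x with fewer measurements than unknowns.
n, m = 16, 8
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = A @ x_true

x = rng.standard_normal(n)       # initial iterate
step = 0.01
for _ in range(3000):
    # (a diffusion denoising step would go here; elided in this sketch)
    residual = A @ x - y
    x = x - step * A.T @ residual   # gradient step on 0.5 * ||A x - y||^2
```

The correction alone drives the iterate onto the measurement-consistent affine subspace; the denoising prior is what then selects a plausible point within it, and the paper's contribution is constraining that step to the data manifold.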
SDEdit: Guided image synthesis and editing with stochastic differential equations
Guided image synthesis enables everyday users to create and edit photo-realistic images
with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand …
Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions
We give an improved theoretical analysis of score-based generative modeling. Under a
score estimate with small $L^2$ error (averaged across timesteps), we provide efficient …
Hidden progress in deep learning: Sgd learns parities near the computational limit
There is mounting evidence of emergent phenomena in the capabilities of deep learning
methods as we scale up datasets, model sizes, and training times. While there are some …
ASAM: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks
Recently, learning algorithms motivated by the sharpness of the loss surface as an effective
measure of the generalization gap have shown state-of-the-art performance. Nevertheless …
Towards efficient and scalable sharpness-aware minimization
Abstract Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of
the loss landscape and generalization, has demonstrated a significant performance boost …
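Both SAM entries above share the same two-step update: ascend to a nearby worst-case point within a radius rho, then descend using the gradient taken there. A minimal sketch on a toy quadratic loss, with illustrative `rho` and `lr` rather than the papers' settings:

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w   # gradient of the toy quadratic loss above

def sam_step(w, lr=0.1, rho=0.05):
    """One minimal SAM update (sketch, not the papers' implementation):
    1) ascend to the approximate worst-case point within radius rho;
    2) descend at w using the gradient evaluated at that point."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # first-order ascent direction
    g_adv = grad(w + eps)                          # gradient at perturbed weights
    return w - lr * g_adv

w = np.array([1.0, -2.0])
w_new = sam_step(w)
```

ASAM's refinement is to make the perturbation radius adaptive to parameter scale so the sharpness measure is invariant to weight rescaling; the efficiency-focused follow-up targets the cost of the extra gradient computation that step 1 introduces.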
Adversarial examples are not bugs, they are features
A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …