An overview of low-rank matrix recovery from incomplete observations

MA Davenport, J Romberg - IEEE Journal of Selected Topics in …, 2016 - ieeexplore.ieee.org
Low-rank matrices play a fundamental role in modeling and computational methods for
signal processing and machine learning. In many applications where low-rank matrices …
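
For readers who want the mechanics behind this literature: a minimal sketch of matrix completion by iterative singular-value soft-thresholding (a SoftImpute-style iteration). The sizes, threshold, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # true low-rank matrix
mask = rng.random((m, n)) < 0.5                                # observed entries

X = np.zeros((m, n))
tau = 5.0                                    # soft-threshold level (illustrative)
for _ in range(200):
    # Fill unobserved entries with the current estimate; keep observed ones fixed.
    Y = np.where(mask, M, X)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values toward zero

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```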

A selective review of group selection in high-dimensional models

J Huang, P Breheny, S Ma - Statistical science: a review journal of …, 2012 - ncbi.nlm.nih.gov
Grouping structures arise naturally in many statistical modeling problems. Several methods
have been proposed for variable selection that respect grouping structure in variables …
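
The workhorse in this area is the group lasso and its relatives; below is a minimal sketch of the group-level proximal step (block soft-thresholding) that produces group selection. The group partition and penalty level are illustrative assumptions.

```python
import numpy as np

def prox_group_lasso(beta, groups, lam):
    """Shrink each coefficient group toward zero; zero it out entirely
    when its Euclidean norm falls below lam (group-level selection)."""
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * beta[g]
    return out

beta = np.array([0.1, -0.2, 3.0, 2.5, 0.05])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(prox_group_lasso(beta, groups, lam=0.5))  # first and last groups vanish
```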

Deja Vu: Contextual sparsity for efficient LLMs at inference time

Z Liu, J Wang, T Dao, T Zhou, B Yuan… - International …, 2023 - proceedings.mlr.press
Large language models (LLMs) with hundreds of billions of parameters have sparked a new
wave of exciting AI applications. However, they are computationally expensive at inference …
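
The idea, loosely: for each input, a cheap predictor picks the few MLP neurons (or heads) likely to matter, and only those are computed. A toy sketch of that pattern; shapes and the top-k rule are illustrative assumptions, and the exact-score "predictor" below stands in for the paper's learned one.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, k = 64, 256, 32                  # hidden dim, MLP width, neurons kept
W1, W2 = rng.standard_normal((h, d)), rng.standard_normal((d, h))
x = rng.standard_normal(d)

# Dense baseline: compute all h neurons.
dense = W2 @ np.maximum(W1 @ x, 0.0)

# Sparse path: score neurons, keep the top-k, skip the rest entirely.
scores = W1 @ x                        # exact scores for clarity; a real system
idx = np.argpartition(scores, -k)[-k:] # would use a small learned predictor
sparse = W2[:, idx] @ np.maximum(W1[idx] @ x, 0.0)

print("relative gap:", np.linalg.norm(dense - sparse) / np.linalg.norm(dense))
```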

Improving diffusion models for inverse problems using manifold constraints

H Chung, B Sim, D Ryu, JC Ye - Advances in Neural …, 2022 - proceedings.neurips.cc
Recently, diffusion models have been used to solve various inverse problems in an
unsupervised manner with appropriate modifications to the sampling process. However, the …
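
Loosely, methods in this line interleave a standard reverse-diffusion step with a data-consistency correction taken through the denoised estimate, which keeps iterates near the data manifold. A hedged sketch in assumed notation (not the paper's exact scheme), for a measurement $y = Ax + n$:

```latex
% Hedged sketch: reverse-diffusion step, then a data-consistency gradient taken
% through the denoised estimate \hat{x}_0(x_t); \alpha is a step size.
\begin{aligned}
x'_{t-1} &= \mathrm{ReverseStep}(x_t;\, s_\theta), \\
x_{t-1}  &= x'_{t-1} - \alpha \,\nabla_{x_t} \bigl\lVert y - A\,\hat{x}_0(x_t) \bigr\rVert_2^2 .
\end{aligned}
```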

SDEdit: Guided image synthesis and editing with stochastic differential equations

C Meng, Y He, Y Song, J Song, J Wu, JY Zhu… - arXiv preprint arXiv …, 2021 - arxiv.org
Guided image synthesis enables everyday users to create and edit photo-realistic images
with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand …
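
The mechanism is simple to state: perturb the user's guide image with noise up to an intermediate time $t_0$, then run the reverse process back to $t = 0$; the choice of $t_0$ trades realism against faithfulness. A structural sketch, where `reverse_step` is a hypothetical stand-in for a trained score-model update:

```python
import numpy as np

rng = np.random.default_rng(0)

def reverse_step(x, t, T):
    # Hypothetical placeholder: a real implementation queries the score network.
    return x * (1.0 - 1.0 / T)

guide = rng.standard_normal((8, 8))   # user-provided stroke painting / edit
T, t0 = 1000, 400                     # start partway through the diffusion
sigma_t0 = np.sqrt(t0 / T)            # assumed noise schedule, illustrative only
x = guide + sigma_t0 * rng.standard_normal(guide.shape)
for t in range(t0, 0, -1):            # denoise from t0 back to 0
    x = reverse_step(x, t, T)
```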

Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions

H Chen, H Lee, J Lu - International Conference on Machine …, 2023 - proceedings.mlr.press
We give an improved theoretical analysis of score-based generative modeling. Under a
score estimate with small $L^2$ error (averaged across timesteps), we provide efficient …
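
For context, score-based samplers simulate the time reversal of a forward noising SDE with the learned score $s_\theta$ substituted for $\nabla \log p_t$; the paper bounds the resulting sampling error when $s_\theta$ is accurate in $L^2$ averaged over timesteps. The standard Ornstein–Uhlenbeck forward/reverse pair, in assumed notation:

```latex
% Forward (OU) noising and its time reversal on [0, T]; a sampler replaces
% \nabla \log p_{T-t} with the learned score s_\theta.
dX_t = -X_t\,dt + \sqrt{2}\,dW_t,
\qquad
d\bar{X}_t = \bigl[\bar{X}_t + 2\,\nabla \log p_{T-t}(\bar{X}_t)\bigr]dt + \sqrt{2}\,d\bar{W}_t .
```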

Hidden progress in deep learning: SGD learns parities near the computational limit

B Barak, B Edelman, S Goel… - Advances in …, 2022 - proceedings.neurips.cc
There is mounting evidence of emergent phenomena in the capabilities of deep learning
methods as we scale up datasets, model sizes, and training times. While there are some …
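
The task in the title is concrete: learn the parity of an unknown $k$-subset of $n$ input bits. A minimal data generator for it; $n$, $k$, and the sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 50, 3, 1000
support = rng.choice(n, size=k, replace=False)  # hidden relevant coordinates
X = rng.integers(0, 2, size=(N, n)) * 2 - 1     # uniform +/-1 inputs
y = X[:, support].prod(axis=1)                  # label = parity of the k bits
```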

ASAM: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks

J Kwon, J Kim, H Park, IK Choi - International Conference on …, 2021 - proceedings.mlr.press
Recently, learning algorithms motivated by the sharpness of the loss surface as an
effective measure of the generalization gap have shown state-of-the-art performance. Nevertheless …

Towards efficient and scalable sharpness-aware minimization

Y Liu, S Mai, X Chen, CJ Hsieh… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of
the loss landscape and generalization, has demonstrated a significant performance boost …
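
One economy in this family, sketched loosely: recompute the expensive perturbed gradient only every $k$ steps and reuse its component orthogonal to the plain gradient in between, cutting SAM's two-passes-per-step overhead. A toy sketch of that reuse idea (constants illustrative; not claimed to be the paper's exact algorithm):

```python
import numpy as np

def loss_grad(w):
    return 2.0 * w * np.array([1.0, 10.0])        # gradient of w1^2 + 10 w2^2

w, rho, lr, k, alpha = np.array([1.0, 1.0]), 0.05, 0.01, 5, 0.7
g_v = np.zeros_like(w)
for step in range(100):
    g = loss_grad(w)
    if step % k == 0:
        # Full SAM step: gradient at the adversarially perturbed point.
        g_s = loss_grad(w + rho * g / (np.linalg.norm(g) + 1e-12))
        # Cache the component of g_s orthogonal to g for reuse.
        g_v = g_s - (g_s @ g) / (g @ g + 1e-12) * g
        update = g_s
    else:
        # Cheap step: reuse the cached sharpness direction, rescaled to g.
        update = g + alpha * np.linalg.norm(g) / (np.linalg.norm(g_v) + 1e-12) * g_v
    w = w - lr * update
```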

Adversarial examples are not bugs, they are features

A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …
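
For a concrete instance of the phenomenon under study: a fast-gradient-sign (FGSM-style) perturbation degrading the margin of a toy linear classifier. The model and $\epsilon$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.standard_normal(10), 0.0            # toy linear classifier
x = rng.standard_normal(10)
y = 1.0 if w @ x + b > 0 else -1.0             # use the model's output as label

eps = 0.25
grad_x = -y * w                                # gradient of margin loss w.r.t. x
x_adv = x + eps * np.sign(grad_x)              # FGSM step
print(y * (w @ x + b), y * (w @ x_adv + b))    # margin drops after the attack
```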