A simple way to make neural networks robust against diverse image corruptions

E Rusak, L Schott, RS Zimmermann, J Bitterwolf… - Computer Vision–ECCV …, 2020 - Springer
… adversarial noise generator towards locally correlated noise … robustness of a model by
sampling a Gaussian noise vector … δ starting from the original image …

Cold diffusion: Inverting arbitrary image transforms without noise

A Bansal, E Borgnia, HM Chu, J Li… - Advances in …, 2024 - proceedings.neurips.cc
models, which relies on noise in either gradient Langevin dynamics or variational inference,
and paves the way for generalized diffusion models that … We then fit a simple GMM with one …
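The snippet refers to Cold Diffusion's generalized sampler, which alternates a learned restoration with re-degradation. A minimal sketch of that update rule (the paper's improved sampling, x_{s-1} = x_s − D(x̂_0, s) + D(x̂_0, s−1)), using a hypothetical linear fade as the degradation D and an idealized, exactly invertible restorer R in place of the trained network:

```python
import numpy as np

T = 10

def degrade(x, t):
    # Toy deterministic degradation (stand-in for the paper's blur/fade):
    # linearly attenuates the signal as t grows. Hypothetical choice.
    return (1.0 - 0.5 * t / T) * x

def restore(x_t, t):
    # Idealized restoration operator R; in the paper this is a trained network.
    return x_t / (1.0 - 0.5 * t / T)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8,))
x = degrade(x0, T)  # fully degraded starting point

# Improved sampling: x_{s-1} = x_s - D(x0_hat, s) + D(x0_hat, s-1)
for s in range(T, 0, -1):
    x0_hat = restore(x, s)
    x = x - degrade(x0_hat, s) + degrade(x0_hat, s - 1)

print(np.allclose(x, x0))  # True: exact recovery with a perfect restorer
```

With a perfect restorer the update recovers x0 exactly at every step, which is the algebraic property that makes the sampler robust to restoration error in practice.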

How to train your energy-based models

Y Song, DP Kingma - arXiv preprint arXiv:2101.03288, 2021 - arxiv.org
… One way to attenuate the inconsistency of DSM is to choose … This noise distribution is usually
simple and has a tractable … can define a mixture of noise and the model distribution: p_{n,θ}(…
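The DSM objective the snippet alludes to regresses a score model onto the score of the Gaussian perturbation kernel, ∇_x̃ log q_σ(x̃|x) = −(x̃ − x)/σ². A minimal one-dimensional sketch, with a toy linear score model fitted in closed form (model choice is an assumption, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
n = 200_000

x = rng.normal(size=n)                     # data ~ N(0, 1)
x_tilde = x + sigma * rng.normal(size=n)   # perturbed samples

# DSM regression target: score of the perturbation kernel,
# grad_x_tilde log q_sigma(x_tilde | x) = -(x_tilde - x) / sigma^2
target = -(x_tilde - x) / sigma**2

# Least-squares fit of a linear score model s(x_tilde) = a * x_tilde.
a = np.dot(x_tilde, target) / np.dot(x_tilde, x_tilde)

# True score of the perturbed marginal N(0, 1 + sigma^2) is -x/(1 + sigma^2).
print(a, -1.0 / (1.0 + sigma**2))
```

The fitted slope matches the score of the *perturbed* marginal, illustrating the inconsistency the snippet mentions: DSM recovers the noisy distribution's score, not the clean one's, unless σ is annealed.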

Improved techniques for training score-based generative models

Y Song, S Ermon - Advances in neural information …, 2020 - proceedings.neurips.cc
… a simplified mixture model, we provide a method to analytically compute an effective set of
Gaussian noise … visual quality in the expected way. In particular, they can be sensitive to slight …

Beyond synthetic noise: Deep learning on controlled noisy labels

L Jiang, D Huang, M Liu… - … conference on machine …, 2020 - proceedings.mlr.press
… Second, this paper introduces a simple yet highly effective … We show that an alternative
way to construct the dataset by … • A simple way to deal with noisy labels is to fine-tune a model

[PDF][PDF] Early-learning regularization prevents memorization of noisy labels

S Liu, J Niles-Weed, N Razavian… - Advances in neural …, 2020 - proceedings.neurips.cc
… labels, we take a different route and instead capitalize on early … We exhibit a simple linear
model with noisy labels which … the true labels, even on noisy examples, and the memorization …

Progressive distillation for fast sampling of diffusion models

T Salimans, J Ho - arXiv preprint arXiv:2202.00512, 2022 - arxiv.org
way down to a single sampling step, the input to the model is only pure noise ϵ, which
corresponds to a signal-to-noise … ±0.1 due to the noise inherent in training our models. Taking the …
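The snippet's point is that at the final distillation stage the model's input is pure noise, i.e. the signal-to-noise ratio α_t²/σ_t² reaches zero at t = 1. A minimal sketch under a variance-preserving cosine schedule (a common choice in this line of work; the exact schedule here is an assumption):

```python
import numpy as np

def alpha_sigma(t):
    # Variance-preserving cosine schedule: alpha^2 + sigma^2 = 1.
    return np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)

for t in [0.0, 0.5, 1.0]:
    a, s = alpha_sigma(t)
    snr = np.inf if s == 0 else (a / s) ** 2
    print(t, snr)
# At t = 1, z_1 = alpha*x + sigma*eps is (numerically) pure noise: SNR ~ 0.
```

Because log-SNR spans (−∞, ∞) over t ∈ (0, 1), halving the number of sampler steps during distillation simply re-spaces the timesteps on this schedule.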

Why are adaptive methods good for attention models?

J Zhang, SP Karimireddy, A Veit… - Advances in …, 2020 - proceedings.neurips.cc
… , we start our discussion with the study of noise distributions of … A simple strategy to
circumvent this issue is to use a biased … Hence, our work provides one way to bridge the theory-…
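The "biased" strategy the snippet refers to is gradient clipping, which trades a small bias for bounded update variance under heavy-tailed gradient noise. A minimal sketch of global-norm clipping (threshold value is illustrative):

```python
import numpy as np

def clip_grad(g, tau):
    # Global-norm clipping: rescale g onto the ball of radius tau.
    # Biased as an estimator of the true gradient, but variance-controlled.
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

g = np.array([3.0, 4.0])   # norm 5
print(clip_grad(g, 1.0))   # rescaled to norm 1 -> [0.6, 0.8]
print(clip_grad(g, 10.0))  # within the ball -> unchanged
```

Adaptive methods like Adam implicitly perform a per-coordinate version of this rescaling, which is one way the paper connects them to robustness under heavy-tailed noise.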

Flow matching for generative modeling

Y Lipman, RTQ Chen, H Ben-Hamu, M Nickel… - arXiv preprint arXiv …, 2022 - arxiv.org
… We find that conditional OT paths are simpler than diffusion paths, forming straight line … the
OT path model starts generating images sooner than the diffusion path models, where noise
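The "straight line" conditional OT path from the flow matching paper interpolates linearly from noise at t = 0 to a data point at t = 1 (up to a small terminal std σ_min), and its target velocity field is constant in t. A minimal sketch with a finite-difference check (σ_min value is illustrative):

```python
import numpy as np

sigma_min = 1e-4  # small terminal std, as in the conditional OT construction

def ot_path(x0, x1, t):
    # Straight-line conditional OT path: psi_t(x0) =
    # (1 - (1 - sigma_min) * t) * x0 + t * x1
    return (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1

def ot_velocity(x0, x1):
    # Time derivative of the path; constant in t:
    # u_t = x1 - (1 - sigma_min) * x0
    return x1 - (1.0 - sigma_min) * x0

rng = np.random.default_rng(0)
x0, x1 = rng.normal(size=3), rng.normal(size=3)

# Finite-difference check that d/dt psi_t matches the target field:
h = 1e-6
fd = (ot_path(x0, x1, 0.5 + h) - ot_path(x0, x1, 0.5 - h)) / (2 * h)
print(np.allclose(fd, ot_velocity(x0, x1), atol=1e-4))  # True
```

A constant velocity means the learned ODE can be integrated accurately with few steps, which is why the OT-path model "starts generating images sooner" than diffusion-path models in the paper's comparison.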

Simple baselines for image restoration

L Chen, X Chu, X Zhang, J Sun - European conference on computer vision, 2022 - Springer
… complexity is not the only way to improve performance: SOTA … with the model size around 16
GMACs following HINet Simple [… Qualitatively compare the noise reduction effects of PMRID […