Improved contrastive divergence training of energy based models

Y Du, S Li, J Tenenbaum, I Mordatch - arXiv preprint arXiv:2012.01316, 2020 - arxiv.org
Contrastive divergence is a popular method of training energy-based models, but is known
to have difficulties with training stability. We propose an adaptation to improve contrastive …

VAEBM: A symbiosis between variational autoencoders and energy-based models

Z Xiao, K Kreis, J Kautz, A Vahdat - arXiv preprint arXiv:2010.00654, 2020 - arxiv.org
Energy-based models (EBMs) have recently been successful in representing complex
distributions of small images. However, sampling from them requires expensive Markov …

Learning latent space energy-based prior model

B Pang, T Han, E Nijkamp, SC Zhu… - Advances in Neural …, 2020 - proceedings.neurips.cc
We propose an energy-based model (EBM) in the latent space of a generator model, so that
the EBM serves as a prior model that stands on the top-down network of the generator …

On the anatomy of MCMC-based maximum likelihood learning of energy-based models

E Nijkamp, M Hill, T Han, SC Zhu, YN Wu - Proceedings of the AAAI …, 2020 - ojs.aaai.org
This study investigates the effects of Markov chain Monte Carlo (MCMC) sampling in
unsupervised Maximum Likelihood (ML) learning. Our attention is restricted to the family of …

Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling

T Che, R Zhang, J Sohl-Dickstein… - Advances in …, 2020 - proceedings.neurips.cc
We show that the sum of the implicit generator log-density $\log p_g$ of a GAN with the logit
score of the discriminator defines an energy function which yields the true data density when …

Abstract spatial-temporal reasoning via probabilistic abduction and execution

C Zhang, B Jia, SC Zhu, Y Zhu - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI) due to its
demanding but unique nature: a theoretic requirement on representing and reasoning …

A contrastive learning approach for training variational autoencoder priors

J Aneja, A Schwing, J Kautz… - Advances in neural …, 2021 - proceedings.neurips.cc
Variational autoencoders (VAEs) are one of the powerful likelihood-based generative
models with applications in many domains. However, they struggle to generate high-quality …

Learning layout and style reconfigurable GANs for controllable image synthesis

W Sun, T Wu - IEEE transactions on pattern analysis and …, 2021 - ieeexplore.ieee.org
With the remarkable recent progress on learning deep generative models, it becomes
increasingly interesting to develop models for controllable image synthesis from …

Generative PointNet: Deep energy-based learning on unordered point sets for 3D generation, reconstruction and classification

J Xie, Y Xu, Z Zheng, SC Zhu… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We propose a generative model of unordered point sets, such as point clouds, in the forms
of an energy-based model, where the energy function is parameterized by an input …

A tale of two flows: Cooperative learning of Langevin flow and normalizing flow toward energy-based model

J Xie, Y Zhu, J Li, P Li - arXiv preprint arXiv:2205.06924, 2022 - arxiv.org
This paper studies the cooperative learning of two generative flow models, in which the two
models are iteratively updated based on the jointly synthesized examples. The first flow …