AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models
Low-rank adaptation (LoRA) is a fine-tuning technique that can be applied to conditional
generative diffusion models. LoRA utilizes a small number of context examples to adapt the …
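The snippet cuts off mid-sentence, but the low-rank mechanism behind LoRA itself is standard; below is a minimal sketch of that generic parametrization (a frozen base weight plus a trainable (alpha/rank)·BA update). The class name and hyperparameters are illustrative, and nothing here reflects AutoLoRA's specific guidance scheme.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of the standard LoRA parametrization: the frozen
    base weight W is augmented with a trainable low-rank update
    (alpha / rank) * B @ A. Names and defaults are illustrative only."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        # B is zero-initialized, so the adapted layer starts equal to the base layer
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(64, 64), rank=8)
out = layer(torch.randn(2, 64))  # same shape as the base layer's output
```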
Amortizing intractable inference in diffusion models for vision, language, and control
Diffusion models have emerged as effective distribution estimators in vision, language, and
reinforcement learning, but their use as priors in downstream tasks poses an intractable …
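For concreteness, the intractability the snippet refers to can be written out. The notation below is ours, not the paper's, with a pretrained diffusion model p(x) as the prior and a generic downstream constraint or likelihood r(y | x):

```latex
% Posterior sampling with a diffusion prior p(x) and a downstream
% constraint r(y \mid x); notation is illustrative, not the paper's.
p(x \mid y) \;=\; \frac{p(x)\, r(y \mid x)}{Z(y)},
\qquad
Z(y) \;=\; \int p(x)\, r(y \mid x)\, \mathrm{d}x .
```

The normalizer Z(y) integrates over the diffusion model's entire support, which is what makes naive posterior sampling infeasible and motivates amortized inference.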
Flow of Reasoning: Efficient Training of LLM Policy with Divergent Thinking
Divergent thinking, the cognitive process of generating diverse solutions, is a hallmark of
human creativity and problem-solving. For machines, sampling diverse solution trajectories …
Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction
Generative modeling of discrete data underlies important applications spanning text-based
agents like ChatGPT to the design of the very building blocks of life in protein sequences …
Understanding and mitigating difficulties in posterior predictive evaluation
A Agrawal, J Domke - arXiv preprint arXiv:2405.19747, 2024 - arxiv.org
Predictive posterior densities (PPDs) are of interest in approximate Bayesian inference.
Typically, these are estimated by simple Monte Carlo (MC) averages using samples from the …
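The simple Monte Carlo average the snippet mentions is easy to state; here is a minimal sketch, where `log_lik` and the posterior-sample interface are assumed, illustrative names. The paper's subject is when this estimator behaves badly; the sketch only shows the estimator itself.

```python
import numpy as np

def log_ppd_mc_estimate(log_lik, theta_samples):
    """Simple Monte Carlo estimate of a posterior predictive density:
    p(y* | y) ~= (1/M) sum_m p(y* | theta_m), with theta_m drawn from an
    (approximate) posterior. `log_lik` maps a parameter draw theta to
    log p(y* | theta). Returns the log of the MC average; working in log
    space via a max-shifted logsumexp keeps the computation stable."""
    log_vals = np.array([log_lik(theta) for theta in theta_samples])
    m = log_vals.max()
    return m + np.log(np.mean(np.exp(log_vals - m)))
```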
Sequential Controlled Langevin Diffusions
An effective approach for sampling from unnormalized densities is based on the idea of
gradually transporting samples from an easy prior to the complicated target distribution. Two …
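The "gradual transport" idea can be sketched with plain annealed Langevin dynamics on a geometric interpolation between prior and target. This is only the generic backbone, not the paper's sequential *controlled* scheme, and all names below are illustrative.

```python
import numpy as np

def annealed_langevin(grad_log_prior, grad_log_target, x, n_steps=500, eps=1e-2):
    """Sketch of gradual transport from an easy prior to a complicated
    target: run Langevin steps on the geometric interpolation
    log pi_t = (1 - beta_t) * log p_prior + beta_t * log p_target,
    with beta_t annealed from 0 to 1. Callers supply the two score
    functions; step size and schedule are illustrative."""
    rng = np.random.default_rng(0)
    for t in range(n_steps):
        beta = t / (n_steps - 1)
        grad = (1.0 - beta) * grad_log_prior(x) + beta * grad_log_target(x)
        x = x + eps * grad + np.sqrt(2.0 * eps) * rng.standard_normal(x.shape)
    return x
```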
Pessimistic Backward Policy for GFlowNets
This paper studies Generative Flow Networks (GFlowNets), which learn to sample objects
proportionally to a given reward function through the trajectory of state transitions. In this …
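The "proportional to reward" target the snippet states has a standard formal reading; in generic GFlowNet notation (not necessarily this paper's), terminal objects x should satisfy p(x) ∝ R(x), and one common training criterion is trajectory balance:

```latex
% Trajectory balance over a trajectory s_0 \to \cdots \to s_n = x
% (generic GFlowNet notation, not necessarily this paper's):
Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t)
\;=\;
R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}) ,
```

with forward policy P_F, backward policy P_B, and a learned normalizer Z. Judging by the title alone, the paper's "pessimistic" modification concerns the backward-policy side.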
Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling
Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in
sampling from intractable probability distributions. However, current studies lack a unified …
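For reference, the ELBO of the title can be written for an unnormalized target ρ(x) = Z p(x) and a variational distribution q (generic notation, not necessarily the paper's):

```latex
% Evidence lower bound for an unnormalized target \rho(x) = Z\,p(x):
\log Z \;\ge\; \mathbb{E}_{x \sim q}\!\left[\log \rho(x) - \log q(x)\right]
\;=\; \mathrm{ELBO}(q),
```

with equality exactly when q = p; the title suggests evaluating samplers on criteria beyond this single bound.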
Streaming Bayes GFlowNets
Bayes' rule naturally allows for inference refinement in a streaming fashion, without the need
to recompute posteriors from scratch whenever new data arrives. In principle, Bayesian …
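The streaming use of Bayes' rule the snippet describes is easiest to see in a conjugate toy model, where p(θ | D_1:t) ∝ p(θ | D_1:t-1) p(D_t | θ) and yesterday's posterior becomes today's prior. The Beta-Bernoulli sketch below is purely illustrative and says nothing about how the paper extends this to GFlowNets.

```python
class StreamingBetaBernoulli:
    """Toy illustration of streaming Bayes: a conjugate Beta prior on a
    Bernoulli success rate is refined batch by batch, with each update
    costing O(len(batch)) and no earlier data revisited."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # Beta(alpha, beta) prior

    def update(self, batch):
        self.alpha += sum(batch)              # observed successes
        self.beta += len(batch) - sum(batch)  # observed failures

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

model = StreamingBetaBernoulli()
for batch in ([1, 0, 1], [1, 1, 1], [0]):
    model.update(batch)  # posterior refined in place, never recomputed from scratch
print(round(model.mean(), 3))
```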
Iterated Energy-based Flow Matching for Sampling from Boltzmann Densities
In this work, we consider the problem of training a generator from evaluations of energy
functions or unnormalized densities. This is a fundamental problem in probabilistic …
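The problem setup the snippet describes can be stated in one line (generic notation, not the paper's):

```latex
% Target is a Boltzmann density known only through its energy E(x),
% with an intractable normalizer Z:
p(x) \;=\; \frac{e^{-E(x)}}{Z},
\qquad
Z \;=\; \int e^{-E(x)}\, \mathrm{d}x ,
```

so the generator must be trained from evaluations of E alone, without i.i.d. samples from p.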