Amortized Bayesian optimization over discrete spaces

K. Swersky, Y. Rubanova, D. Dohan, et al. — Conference on Uncertainty in Artificial Intelligence, 2020 — proceedings.mlr.press
Abstract
Bayesian optimization is a principled approach for globally optimizing expensive, black-box functions by using a surrogate model of the objective. However, each step of Bayesian optimization involves solving an inner optimization problem, in which we maximize an acquisition function derived from the surrogate model to decide where to query next. This inner problem can be challenging to solve, particularly in discrete spaces, such as protein sequences or molecular graphs, where gradient-based optimization cannot be used. Our key insight is that we can train a generative model to generate candidates that maximize the acquisition function. This is faster than standard model-free local search methods, since we can amortize the cost of learning the model across multiple rounds of Bayesian optimization. We therefore call this Amortized Bayesian Optimization. On several challenging discrete design problems, we show this method generally outperforms other methods at optimizing the inner acquisition function, resulting in more efficient optimization of the outer black-box objective.
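The abstract's key idea — replacing model-free local search over the acquisition function with a learned generative proposal model — can be illustrated with a minimal sketch. This is not the paper's implementation: the ridge-regression surrogate, the mean-only acquisition, the independent-Bernoulli generator, the REINFORCE update, and the pattern-matching objective `black_box` are all simplifying assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
hidden = rng.integers(0, 2, size=N)  # hidden pattern (toy problem, an assumption)

def black_box(x):
    """Expensive black-box objective over binary strings (hypothetical)."""
    return float(np.sum(x == hidden))

def fit_surrogate(X, y):
    """Cheap surrogate of the objective: ridge regression with a bias term."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(X1.T @ X1 + 1e-3 * np.eye(N + 1), X1.T @ y)

def acquisition(w, X):
    """Simplified acquisition: surrogate mean (a real system would add an
    exploration term such as UCB or expected improvement)."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return X1 @ w

def train_generator(w, steps=200, batch=64, lr=0.1):
    """Amortization step: train an independent-Bernoulli generative model
    with REINFORCE so its samples score highly under the acquisition."""
    logits = np.zeros(N)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-logits))
        samples = (rng.random((batch, N)) < p).astype(float)
        scores = acquisition(w, samples)
        adv = scores - scores.mean()          # variance-reduction baseline
        grad = (adv[:, None] * (samples - p)).mean(axis=0)  # d log p / d logit
        logits += lr * grad                   # gradient ascent on E[score]
    return logits

# Outer Bayesian-optimization loop: fit surrogate, amortize the inner
# acquisition maximization with the generator, query the black box.
X = (rng.random((5, N)) < 0.5).astype(float)
y = np.array([black_box(x) for x in X])
for _ in range(10):
    w = fit_surrogate(X, y)
    logits = train_generator(w)
    p = 1.0 / (1.0 + np.exp(-logits))
    cand = (rng.random(N) < p).astype(float)  # propose next query point
    X = np.vstack([X, cand])
    y = np.append(y, black_box(cand))
print("best value found:", y.max(), "of", N)
```

The amortization pays off because, unlike a fresh local search each round, the generator's parameters carry over and can be warm-started across rounds of the outer loop.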