Eight years of AutoML: categorisation, review and trends

R Barbudo, S Ventura, JR Romero - Knowledge and Information Systems, 2023 - Springer
Abstract Knowledge extraction through machine learning techniques has been successfully
applied in a large number of application domains. However, apart from the required …

Active learning: Problem settings and recent developments

H Hino - arXiv preprint arXiv:2012.04225, 2020 - arxiv.org
In supervised learning, acquiring labeled training data for a predictive model can be very
costly, but acquiring a large amount of unlabeled data is often quite easy. Active learning is …
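The snippet describes the core active-learning setting: labels are expensive, unlabeled data is cheap, so the learner chooses which points to label. A minimal sketch of one common strategy, uncertainty sampling, on a synthetic 1-D problem (all names and the toy threshold classifier are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D binary task: the true label is 1 iff x > 0.
X_pool = rng.uniform(-1.0, 1.0, size=200)
y_pool = (X_pool > 0).astype(int)

# Start with two labeled points (one per class); the rest are "unlabeled".
labeled = [int(np.argmin(X_pool)), int(np.argmax(X_pool))]
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

def fit_threshold(idx):
    """Fit a toy classifier: threshold at the midpoint of the class means."""
    xs, ys = X_pool[idx], y_pool[idx]
    return (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2.0

for _ in range(10):
    t = fit_threshold(labeled)
    # Uncertainty sampling: query the unlabeled point closest to the
    # current decision boundary (the model is least certain there).
    q = min(unlabeled, key=lambda i: abs(X_pool[i] - t))
    labeled.append(q)      # "pay" for exactly one new label per round
    unlabeled.remove(q)

t = fit_threshold(labeled)
accuracy = float(np.mean((X_pool > t) == y_pool))
```

With only 12 labels the queried points cluster around the boundary, so the fitted threshold lands close to the true one.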

A survey on multi-objective hyperparameter optimization algorithms for machine learning

A Morales-Hernández, I Van Nieuwenhuyse… - Artificial Intelligence …, 2023 - Springer
Hyperparameter optimization (HPO) is a necessary step to ensure the best possible
performance of Machine Learning (ML) algorithms. Several methods have been developed …
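As a concrete baseline for the HPO problem the snippet names, random search draws hyperparameters from a search space and keeps the best validation result. A minimal sketch; the synthetic `val_loss` surface and its optimum at lr=0.1, reg=0.01 are stand-ins for an expensive train-and-validate run, not anything from the survey:

```python
import math
import random

random.seed(0)

def val_loss(lr, reg):
    """Stand-in for training a model and returning its validation loss.
    Smooth synthetic surface with its minimum at lr=0.1, reg=0.01."""
    return (math.log10(lr) + 1) ** 2 + (math.log10(reg) + 2) ** 2

best = None
for _ in range(50):
    # Sample log-uniformly, a common choice for scale-like hyperparameters.
    lr = 10 ** random.uniform(-4, 0)
    reg = 10 ** random.uniform(-5, -1)
    loss = val_loss(lr, reg)
    if best is None or loss < best[0]:
        best = (loss, lr, reg)
```

Multi-objective HPO methods, the survey's subject, replace the single `loss` with a vector of objectives (e.g. accuracy and latency) and keep a Pareto front rather than one best point.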

Federated Bayesian optimization via Thompson sampling

Z Dai, BKH Low, P Jaillet - Advances in Neural Information …, 2020 - proceedings.neurips.cc
Bayesian optimization (BO) is a prominent approach to optimizing expensive-to-evaluate
black-box functions. The massive computational capability of edge devices such as mobile …
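This entry (and the two that follow it) builds on Thompson sampling. A minimal sketch of the classic Beta-Bernoulli version on a toy 3-armed bandit, to show the posterior-sampling idea the federated variant distributes across clients; the arm count and `true_rates` are made up for illustration:

```python
import random

random.seed(0)

# Hypothetical 3-armed Bernoulli bandit; the true rates are unknown to
# the learner and used only to simulate rewards.
true_rates = [0.2, 0.5, 0.8]
alpha = [1.0] * 3   # Beta(alpha, beta) posterior per arm, starting uniform
beta = [1.0] * 3

pulls = [0] * 3
for _ in range(2000):
    # Thompson sampling: draw one sample from each arm's posterior
    # and pull the arm whose sample is largest.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(3)]
    arm = max(range(3), key=lambda i: samples[i])
    reward = 1 if random.random() < true_rates[arm] else 0
    alpha[arm] += reward        # Beta posterior update for a Bernoulli arm
    beta[arm] += 1 - reward
    pulls[arm] += 1

best_arm = max(range(3), key=lambda i: pulls[i])
```

Randomizing over the posterior balances exploration and exploitation automatically: play concentrates on the best arm as its posterior sharpens.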

Differentially private federated Bayesian optimization with distributed exploration

Z Dai, BKH Low, P Jaillet - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Bayesian optimization (BO) has recently been extended to the federated learning (FL)
setting by the federated Thompson sampling (FTS) algorithm, which has promising …

Sample-then-optimize batch neural Thompson sampling

Z Dai, Y Shu, BKH Low, P Jaillet - Advances in Neural …, 2022 - proceedings.neurips.cc
Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its
objective function, is popular for black-box optimization. However, due to the limitations of …
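The snippet states the standard BO recipe: a Gaussian process surrogate over the objective plus an acquisition function to pick the next query. A minimal 1-D sketch under simple assumptions (RBF kernel, UCB acquisition, grid of candidates, made-up objective `f`), not the batch neural method the paper itself proposes:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Hypothetical expensive black-box objective, maximized at x = 0.3."""
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    """RBF kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

grid = np.linspace(0.0, 1.0, 101)        # candidate query points
X = list(rng.uniform(0, 1, size=2))       # two random initial evaluations
y = [f(x) for x in X]

for _ in range(8):
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))   # jitter for stability
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, ya)            # GP posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    ucb = mu + 2.0 * np.sqrt(var)               # UCB acquisition
    x_next = float(grid[int(np.argmax(ucb))])   # query most promising point
    X.append(x_next)
    y.append(f(x_next))

x_best = X[int(np.argmax(y))]
```

UCB first favors high-variance (unexplored) regions, then, as the posterior tightens, concentrates evaluations near the optimum.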

Quantum Bayesian optimization

Z Dai, GKR Lau, A Verma, Y Shu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Kernelized bandits, also known as Bayesian optimization (BO), has been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …

Unifying and boosting gradient-based training-free neural architecture search

Y Shu, Z Dai, Z Wu, BKH Low - Advances in Neural …, 2022 - proceedings.neurips.cc
Neural architecture search (NAS) has gained immense popularity owing to its ability to
automate neural architecture design. A number of training-free metrics are recently …

Efficient distributionally robust Bayesian optimization with worst-case sensitivity

SS Tay, CS Foo, U Daisuke… - … on Machine Learning, 2022 - proceedings.mlr.press
In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-
case expected value requires solving an expensive convex optimization problem. We …

Bayesian optimization under stochastic delayed feedback

A Verma, Z Dai, BKH Low - International Conference on …, 2022 - proceedings.mlr.press
Bayesian optimization (BO) is a widely used sequential method for zeroth-order optimization
of complex and expensive-to-compute black-box functions. The existing BO methods …