Eight years of AutoML: categorisation, review and trends
Knowledge extraction through machine learning techniques has been successfully
applied in a large number of application domains. However, apart from the required …
Active learning: Problem settings and recent developments
H Hino - arXiv preprint arXiv:2012.04225, 2020 - arxiv.org
In supervised learning, acquiring labeled training data for a predictive model can be very
costly, but acquiring a large amount of unlabeled data is often quite easy. Active learning is …
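A common active-learning strategy hinted at by this abstract is uncertainty sampling: query the label of the unlabeled point the current model is least sure about. A minimal sketch, assuming a toy 1-D logistic scorer as a stand-in for a trained classifier (the function and pool below are illustrative, not from the cited survey):

```python
import math

def predict_proba(x):
    # Toy stand-in for a trained classifier: logistic score on a 1-D feature.
    return 1.0 / (1.0 + math.exp(-x))

def most_uncertain(pool):
    # Uncertainty sampling: pick the point whose predicted probability
    # is closest to the decision boundary p = 0.5.
    return min(pool, key=lambda x: abs(predict_proba(x) - 0.5))

pool = [-3.0, -1.0, 0.2, 2.5]
query = most_uncertain(pool)  # 0.2 lies closest to the boundary
```

In practice the query's label is obtained from an oracle (e.g. a human annotator), the model is retrained, and the loop repeats, which is what makes labeling budget-efficient.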
A survey on multi-objective hyperparameter optimization algorithms for machine learning
A Morales-Hernández, I Van Nieuwenhuyse… - Artificial Intelligence …, 2023 - Springer
Hyperparameter optimization (HPO) is a necessary step to ensure the best possible
performance of Machine Learning (ML) algorithms. Several methods have been developed …
Federated Bayesian optimization via Thompson sampling
Bayesian optimization (BO) is a prominent approach to optimizing expensive-to-evaluate
black-box functions. The massive computational capability of edge devices such as mobile …
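As background for the federated Thompson sampling this entry describes, the classic (non-federated) Thompson sampling loop on a Bernoulli bandit can be sketched as follows; the arm reward probabilities are illustrative assumptions, not data from the paper:

```python
import random

def thompson_run(true_probs, n_rounds, seed=0):
    # Classic Thompson sampling on a Bernoulli multi-armed bandit.
    rng = random.Random(seed)
    # Beta(1, 1) prior per arm; alpha/beta track successes/failures + 1.
    alpha = [1] * len(true_probs)
    beta = [1] * len(true_probs)
    pulls = [0] * len(true_probs)
    for _ in range(n_rounds):
        # Sample a plausible mean reward for each arm, play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i])
                   for i in range(len(true_probs))]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_run([0.2, 0.5, 0.8], n_rounds=2000)
# After enough rounds, the best arm (index 2) dominates the pull counts.
```

The federated variant distributes such posterior sampling across clients; this sketch only shows the single-agent core the paper builds on.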
Differentially private federated Bayesian optimization with distributed exploration
Bayesian optimization (BO) has recently been extended to the federated learning (FL)
setting by the federated Thompson sampling (FTS) algorithm, which has promising …
Sample-then-optimize batch neural Thompson sampling
Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its
objective function, is popular for black-box optimization. However, due to the limitations of …
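The GP-surrogate loop this abstract refers to can be sketched in miniature: fit a GP posterior to a few observations, then pick the next query by maximizing an acquisition score such as an upper confidence bound. Everything below (two observed points, RBF length-scale, UCB with kappa = 2) is an illustrative assumption, not the paper's method:

```python
import math

def rbf(a, b, ls=1.0):
    # Squared-exponential (RBF) kernel on scalars.
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

X = [0.0, 2.0]   # observed inputs (assumed data)
y = [0.5, 1.5]   # observed black-box values (assumed data)
noise = 1e-6

# 2x2 kernel matrix K + noise*I, inverted analytically.
k00 = rbf(X[0], X[0]) + noise
k11 = rbf(X[1], X[1]) + noise
k01 = rbf(X[0], X[1])
det = k00 * k11 - k01 * k01
Kinv = [[k11 / det, -k01 / det], [-k01 / det, k00 / det]]
# alpha = K^{-1} y, reused by every posterior-mean evaluation.
alpha = [Kinv[0][0] * y[0] + Kinv[0][1] * y[1],
         Kinv[1][0] * y[0] + Kinv[1][1] * y[1]]

def posterior(x):
    # GP posterior mean and variance at a test point x.
    kx = [rbf(x, X[0]), rbf(x, X[1])]
    mean = kx[0] * alpha[0] + kx[1] * alpha[1]
    v = [Kinv[0][0] * kx[0] + Kinv[0][1] * kx[1],
         Kinv[1][0] * kx[0] + Kinv[1][1] * kx[1]]
    var = rbf(x, x) - (kx[0] * v[0] + kx[1] * v[1])
    return mean, max(var, 0.0)

def ucb(x, kappa=2.0):
    # Upper-confidence-bound acquisition: mean + kappa * std.
    m, v = posterior(x)
    return m + kappa * math.sqrt(v)

# Next query point: grid point on [0, 4] maximizing UCB.
grid = [i * 0.1 for i in range(41)]
next_x = max(grid, key=ucb)
```

Note how UCB trades off exploitation (high posterior mean near the best observation) against exploration (high posterior variance far from the data), which is why the selected point lies beyond the observed inputs here.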
Quantum Bayesian optimization
Kernelized bandits, also known as Bayesian optimization (BO), has been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …
Unifying and boosting gradient-based training-free neural architecture search
Neural architecture search (NAS) has gained immense popularity owing to its ability to
automate neural architecture design. A number of training-free metrics are recently …
Efficient distributionally robust Bayesian optimization with worst-case sensitivity
In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-
case expected value requires solving an expensive convex optimization problem. We …
Bayesian optimization under stochastic delayed feedback
Bayesian optimization (BO) is a widely-used sequential method for zeroth-order optimization
of complex and expensive-to-compute black-box functions. The existing BO methods …