Sample-then-optimize batch neural Thompson sampling
Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its
objective function, is popular for black-box optimization. However, due to the limitations of …
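For reference, below is a minimal sketch of the GP-surrogate Thompson-sampling loop this abstract alludes to (plain sequential TS, not the paper's sample-then-optimize batch neural variant): fit a GP to past queries, draw one posterior sample over a candidate set, and query its argmax. The toy objective, kernel, and all names are illustrative assumptions.

```python
# A minimal sketch of GP-based Thompson sampling on a discrete candidate set.
# The objective, kernel, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):  # toy black-box function standing in for the real objective
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
candidates = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
X = rng.uniform(-2.0, 2.0, size=(3, 1))          # initial design
y = objective(X).ravel()

for t in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
    gp.fit(X, y)
    # Thompson sampling: draw one posterior sample over the candidates
    # and query the point where that sample is maximized.
    sample = gp.sample_y(candidates, n_samples=1, random_state=t).ravel()
    x_next = candidates[np.argmax(sample)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)], "best y:", y.max())
```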
Quantum Bayesian optimization
Kernelized bandits, also known as Bayesian optimization (BO), have been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …
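As a point of reference for the kernelized-bandit setting, here is a hedged sketch of one GP-UCB acquisition step, a standard classical BO algorithm rather than the quantum variant proposed in the paper; the kernel, the exploration weight beta, and the candidate grid are all assumptions.

```python
# A hedged sketch of a single GP-UCB acquisition step for a kernelized bandit.
# beta and the candidate grid are illustrative, not taken from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

X_obs = np.array([[0.1], [0.5], [0.9]])           # past arm pulls
y_obs = np.array([0.2, 0.8, 0.3])                 # observed rewards
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3).fit(X_obs, y_obs)

grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
beta = 2.0                                        # exploration weight (assumed)
ucb = mean + np.sqrt(beta) * std                  # upper confidence bound
x_next = grid[np.argmax(ucb)]
print("next arm to pull:", x_next)
```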
Use your instinct: Instruction optimization using neural bandits coupled with transformers
Large language models (LLMs) have shown remarkable instruction-following capabilities
and achieved impressive performance in various applications. However, the performance …
Batch Bayesian optimization for replicable experimental design
Many real-world experimental design problems (a) evaluate multiple experimental
conditions in parallel and (b) replicate each condition multiple times due to large and …
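To make properties (a) and (b) concrete, here is a hedged sketch of a replicated batch evaluation: a batch of conditions is run in parallel, each replicated several times under observation noise, and the per-condition means and standard errors are what a surrogate model would then be fit to. The function names and noise model are assumptions.

```python
# A minimal sketch of a replicated batch evaluation: each condition in the
# batch is replicated r times under noise, and replicate means/variances
# summarize the condition. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(condition, noise_sd=0.5):
    # stand-in for a noisy real-world experiment
    true_effect = -(condition - 0.3) ** 2
    return true_effect + rng.normal(0.0, noise_sd)

batch = np.array([0.0, 0.25, 0.5, 0.75])    # (a) conditions evaluated in parallel
replicates = 5                               # (b) replications per condition

results = np.array([[run_experiment(c) for _ in range(replicates)] for c in batch])
means = results.mean(axis=1)                 # per-condition estimate
sems = results.std(axis=1, ddof=1) / np.sqrt(replicates)
for c, m, s in zip(batch, means, sems):
    print(f"condition {c:.2f}: mean {m:+.3f} +/- {s:.3f}")
```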
Training-free neural active learning with initialization-robustness guarantees
Existing neural active learning algorithms have aimed to optimize the predictive
performance of neural networks (NNs) by selecting data for labelling. However, other than a …
PINNACLE: PINN Adaptive ColLocation and Experimental points selection
Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints,
train with a composite loss function that contains multiple training point types: different types …
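To illustrate the "composite loss with multiple training point types" that PINNACLE selects points for, here is a minimal PyTorch sketch of a vanilla PINN loss for a 1-D Poisson problem, combining PDE collocation points and boundary points; the network, source term, and loss weight are assumptions, and PINNACLE's adaptive point selection itself is not shown.

```python
# A hedged sketch of a PINN composite loss for the 1-D Poisson problem
# u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, combining two training point
# types: PDE collocation points and boundary points. Weights are illustrative.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
f = lambda x: -torch.sin(torch.pi * x) * torch.pi ** 2   # source term (assumed)

def pinn_loss(x_colloc, x_bdry):
    x = x_colloc.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_residual = ((d2u - f(x)) ** 2).mean()            # soft PDE constraint
    bdry_residual = (net(x_bdry) ** 2).mean()            # u = 0 on the boundary
    return pde_residual + 10.0 * bdry_residual           # composite loss

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_c = torch.rand(64, 1)                                  # collocation points
x_b = torch.tensor([[0.0], [1.0]])                       # boundary points
for _ in range(500):
    opt.zero_grad()
    loss = pinn_loss(x_c, x_b)
    loss.backward()
    opt.step()
```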
Federated zeroth-order optimization using trajectory-informed surrogate gradients
Federated optimization, an emerging paradigm which finds wide real-world applications
such as federated learning, enables multiple clients (e.g., edge devices) to collaboratively …
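For context, a hedged sketch of the standard two-point zeroth-order gradient estimator that such methods build on; the paper's trajectory-informed surrogate gradients refine this idea and are not reproduced here. The smoothing radius and direction count are assumptions.

```python
# A minimal sketch of a two-point zeroth-order gradient estimator: the
# gradient of a black-box f is approximated from function-value queries only.
# mu (smoothing radius) and n_dirs (number of random directions) are assumed.
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-2, n_dirs=20):
    """Two-point Gaussian-smoothing estimate of grad f(x) from value queries."""
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

f = lambda x: np.sum((x - 1.0) ** 2)   # toy black-box objective (assumed)
x = np.zeros(3)
for _ in range(200):                    # plain zeroth-order gradient descent
    x -= 0.05 * zo_gradient(f, x)
print("estimate:", x)                   # should approach [1, 1, 1]
```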
FedHQL: Federated heterogeneous Q-learning
Federated Reinforcement Learning (FedRL) encourages distributed agents to learn
collectively from each other's experience to improve their performance without exchanging …
Harnessing the Power of Federated Learning in Federated Contextual Bandits
Federated learning (FL) has demonstrated great potential in revolutionizing distributed
machine learning, and tremendous efforts have been made to extend it beyond the original …
No-regret Sample-efficient Bayesian Optimization for Finding Nash Equilibria with Unknown Utilities
The Nash equilibrium (NE) is a classic solution concept for normal-form games that is stable
under potential unilateral deviations by self-interested agents. Bayesian optimization (BO) …
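The stability condition referenced here is the standard one; as a reminder, in standard notation (not taken from the paper):

```latex
% Nash equilibrium: no player i can gain by unilaterally deviating from s*.
% u_i is player i's utility and s_{-i}^* are the other players' strategies.
\[
  u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*)
  \quad \text{for all players } i \text{ and all strategies } s_i .
\]
```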