Sample-then-optimize batch neural Thompson sampling

Z Dai, Y Shu, BKH Low, P Jaillet - Advances in Neural …, 2022 - proceedings.neurips.cc
Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its
objective function, is popular for black-box optimization. However, due to the limitations of …
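
As context for this entry, here is a minimal sketch of GP-based Thompson sampling on a toy 1-D black-box problem: draw one function from the GP posterior, then query its maximizer. The paper's sample-then-optimize neural variant replaces the explicit posterior draw with a trained network, which is not shown here; the objective, kernel, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Hypothetical smooth 1-D black-box objective (illustration only).
def objective(x):
    return np.sin(3.0 * x) - x**2 + 0.7 * x

def rbf_kernel(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 2.0, 200)            # candidate inputs
X = list(rng.uniform(-1.0, 2.0, size=3))      # initial design
y = [objective(x) for x in X]

for t in range(20):
    Xa = np.array(X)
    K = rbf_kernel(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf_kernel(grid, Xa)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ np.array(y)                       # GP posterior mean
    cov = rbf_kernel(grid, grid) - Ks @ K_inv @ Ks.T    # GP posterior covariance
    # Thompson sampling step: draw one function from the posterior
    # and query wherever that sample is maximized.
    f_sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(grid)))
    x_next = grid[np.argmax(f_sample)]
    X.append(x_next)
    y.append(objective(x_next))

print("best observed value:", max(y))
```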

Quantum Bayesian optimization

Z Dai, GKR Lau, A Verma, Y Shu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Kernelized bandits, also known as Bayesian optimization (BO), have been a prevalent
method for optimizing complicated black-box reward functions. Various BO algorithms have …
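
The kernelized-bandit setting mentioned here is usually solved with an upper-confidence-bound rule over the GP posterior. Below is a sketch of the classical GP-UCB arm choice (the quantum speedup that is this paper's contribution is not shown); the `beta_t` schedule is one common textbook choice, not the paper's.

```python
import numpy as np

def gp_ucb_choice(mu, sigma, t, delta=0.1):
    """Classical GP-UCB rule: pick the arm maximizing the upper
    confidence bound mu + sqrt(beta_t) * sigma."""
    beta_t = 2.0 * np.log(len(mu) * (t + 1) ** 2 * np.pi**2 / (6 * delta))
    return int(np.argmax(mu + np.sqrt(beta_t) * sigma))

# Usage with hypothetical posterior statistics over 5 arms:
mu = np.array([0.1, 0.4, 0.3, 0.2, 0.0])
sigma = np.array([0.5, 0.1, 0.3, 0.2, 0.6])
print(gp_ucb_choice(mu, sigma, t=10))
```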

Use your instinct: Instruction optimization using neural bandits coupled with transformers

X Lin, Z Wu, Z Dai, W Hu, Y Shu, SK Ng, P Jaillet… - arXiv preprint arXiv …, 2023 - mit.edu
Large language models (LLMs) have shown remarkable instruction-following capabilities
and achieved impressive performance in various applications. However, the performance …

Batch Bayesian optimization for replicable experimental design

Z Dai, QP Nguyen, S Tay, D Urano… - Advances in …, 2024 - proceedings.neurips.cc
Many real-world experimental design problems (a) evaluate multiple experimental
conditions in parallel and (b) replicate each condition multiple times due to large and …
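
One round of the (a)-and-(b) setup described here can be sketched as follows: pick a batch of q conditions, run r replicates of each, and feed variance-reduced summaries back to the surrogate. The experiment, the random batch choice standing in for a real acquisition function, and the noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_experiment(x, reps):
    """Hypothetical noisy experiment: true signal plus heavy noise,
    evaluated `reps` times at the same condition x."""
    return -(x - 0.6) ** 2 + rng.normal(0.0, 0.5, size=reps)

# One round: q parallel conditions, each replicated r times.
q, r = 4, 5
candidates = rng.uniform(0.0, 1.0, 50)
batch = rng.choice(candidates, size=q, replace=False)  # placeholder acquisition

observations = {}
for x in batch:
    ys = noisy_experiment(x, reps=r)
    # Replication shrinks the standard error of each condition's
    # estimate by a factor of sqrt(r) before the surrogate sees it.
    observations[float(x)] = (ys.mean(), ys.std(ddof=1) / np.sqrt(r))

for x, (m, se) in observations.items():
    print(f"condition {x:.3f}: mean {m:.3f} ± {se:.3f}")
```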

Training-free neural active learning with initialization-robustness guarantees

A Hemachandra, Z Dai, J Singh… - International …, 2023 - proceedings.mlr.press
Existing neural active learning algorithms have aimed to optimize the predictive
performance of neural networks (NNs) by selecting data for labelling. However, other than a …
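
The data-selection loop referred to here can be illustrated with a generic ensemble-disagreement score over an unlabelled pool; this stands in for the paper's training-free, initialization-robust criterion and is not its actual method. The random linear "models" and pool are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def predictive_disagreement(models, X):
    """Generic active-learning score: disagreement across an ensemble
    of randomly initialized predictors at each candidate point."""
    preds = np.stack([m(X) for m in models])   # (n_models, n_points)
    return preds.std(axis=0)

# Hypothetical ensemble: random linear predictors on 1-D inputs
# (default args freeze a fresh w, b per member at creation time).
models = [lambda X, w=rng.normal(), b=rng.normal(): w * X + b
          for _ in range(8)]

pool = rng.uniform(-2, 2, 100)          # unlabelled pool
scores = predictive_disagreement(models, pool)
query = pool[np.argsort(scores)[-5:]]   # label the 5 most contested points
print("points selected for labelling:", np.round(query, 2))
```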

PINNACLE: PINN Adaptive ColLocation and Experimental points selection

GKR Lau, A Hemachandra, SK Ng… - arXiv preprint arXiv …, 2024 - arxiv.org
Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints,
train with a composite loss function that contains multiple training point types: different types …
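
The composite loss over multiple training-point types mentioned here can be made concrete with a toy example: a PDE-residual term at collocation points, a boundary-condition term, and a data term, summed with uniform weights. The ODE u'(x) = -u(x) with u(0) = 1, the finite-difference stand-in for autodiff, and the weights are illustrative assumptions, not PINNACLE's setup.

```python
import numpy as np

def composite_pinn_loss(u, colloc_x, bc_x, bc_u, data_x, data_u, eps=1e-4):
    # PDE residual at collocation points, via central finite
    # differences in place of autodiff, for the ODE u'(x) = -u(x).
    du = (u(colloc_x + eps) - u(colloc_x - eps)) / (2 * eps)
    pde_loss = np.mean((du + u(colloc_x)) ** 2)
    # Boundary-condition term: u(0) = 1 in this toy problem.
    bc_loss = np.mean((u(bc_x) - bc_u) ** 2)
    # Experimental/data term: fit the observed values.
    data_loss = np.mean((u(data_x) - data_u) ** 2)
    return pde_loss + bc_loss + data_loss  # uniform weights for simplicity

# The exact solution u(x) = exp(-x) stands in for a trained network,
# so all three terms should evaluate to near zero.
u = lambda x: np.exp(-x)
colloc = np.linspace(0.0, 1.0, 32)
print(composite_pinn_loss(u, colloc,
                          np.array([0.0]), np.array([1.0]),
                          np.array([0.5]), np.array([np.exp(-0.5)])))
```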

Federated zeroth-order optimization using trajectory-informed surrogate gradients

Y Shu, X Lin, Z Dai, BKH Low - arXiv preprint arXiv:2308.04077, 2023 - arxiv.org
Federated optimization, an emerging paradigm which finds wide real-world applications
such as federated learning, enables multiple clients (e.g., edge devices) to collaboratively …
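
For reference, the standard two-point zeroth-order gradient estimator that trajectory-informed surrogates improve upon looks like the sketch below: averaged directional finite differences along random directions, used in a client-side update. Unlike the paper's surrogate, this plain estimator discards past queries; the objective and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def two_point_grad(f, x, mu=1e-3, n_dirs=10):
    """Two-point zeroth-order estimator: average directional finite
    differences along random unit directions, scaled by dimension."""
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g * (len(x) / n_dirs)

# Hypothetical client-side update in federated zeroth-order optimization:
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * two_point_grad(f, x)
print("client iterate near the optimum:", np.round(x, 2))
```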

FedHQL: Federated heterogeneous Q-learning

FX Fan, Y Ma, Z Dai, C Tan, BKH Low… - arXiv preprint arXiv …, 2023 - arxiv.org
Federated Reinforcement Learning (FedRL) encourages distributed agents to learn
collectively from each other's experience to improve their performance without exchanging …
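
The "learn collectively without exchanging raw experience" idea can be illustrated by the naive homogeneous baseline: average tabular Q-estimates across agents. FedHQL targets the harder heterogeneous setting (agents with differing architectures and exploration schemes), which plain averaging does not cover; the table sizes below are hypothetical.

```python
import numpy as np

def federated_q_average(local_q_tables):
    """Naive FedRL aggregation: average tabular Q-estimates across
    agents, sharing value estimates rather than raw trajectories."""
    return np.mean(np.stack(local_q_tables), axis=0)

# Hypothetical setup: 3 agents, 4 states x 2 actions.
rng = np.random.default_rng(4)
local_tables = [rng.uniform(0, 1, (4, 2)) for _ in range(3)]
global_q = federated_q_average(local_tables)
print("greedy policy from aggregated Q:", global_q.argmax(axis=1))
```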

Harnessing the Power of Federated Learning in Federated Contextual Bandits

C Shi, R Zhou, K Yang, C Shen - arXiv preprint arXiv:2312.16341, 2023 - arxiv.org
Federated learning (FL) has demonstrated great potential in revolutionizing distributed
machine learning, and tremendous efforts have been made to extend it beyond the original …

No-regret Sample-efficient Bayesian Optimization for Finding Nash Equilibria with Unknown Utilities

SS Tay, QP Nguyen, CS Foo… - … Conference on Artificial …, 2023 - proceedings.mlr.press
The Nash equilibrium (NE) is a classic solution concept for normal-form games that is stable
under potential unilateral deviations by self-interested agents. Bayesian optimization (BO) …
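
The stability property stated here (no player gains from a unilateral deviation) is easy to check directly in a small normal-form game, as in this sketch; the prisoner's-dilemma payoffs are a standard textbook example, not data from the paper.

```python
import numpy as np

def is_pure_nash(payoff_a, payoff_b, i, j):
    """Check the defining NE property: neither player can improve
    their payoff by unilaterally deviating from (i, j)."""
    a_ok = payoff_a[i, j] >= payoff_a[:, j].max()   # row player can't improve
    b_ok = payoff_b[i, j] >= payoff_b[i, :].max()   # column player can't improve
    return bool(a_ok and b_ok)

# Prisoner's dilemma: mutual defection (1, 1) is the unique pure NE.
A = np.array([[3, 0], [5, 1]])   # row player's payoffs
B = np.array([[3, 5], [0, 1]])   # column player's payoffs
print([(i, j) for i in range(2) for j in range(2) if is_pure_nash(A, B, i, j)])
```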