Tuning hyperparameters without grad students: Scalable and robust Bayesian optimisation with Dragonfly

K Kandasamy, KR Vysyaraju, W Neiswanger… - Journal of Machine …, 2020 - jmlr.org
Bayesian Optimisation (BO) refers to a suite of techniques for global optimisation of
expensive black-box functions, which use introspective Bayesian models of the function to …

Data-driven polynomial chaos expansion for machine learning regression

E Torre, S Marelli, P Embrechts, B Sudret - Journal of Computational …, 2019 - Elsevier
We present a regression technique for data-driven problems based on polynomial chaos
expansion (PCE). PCE is a popular technique in the field of uncertainty quantification (UQ) …

Gaussian process bandit optimisation with multi-fidelity evaluations

K Kandasamy, G Dasarathy, JB Oliva… - Advances in neural …, 2016 - proceedings.neurips.cc
In many scientific and engineering applications, we are tasked with the optimisation of an
expensive-to-evaluate black-box function $f$. Traditional methods for this problem …

Sparse interaction additive networks via feature interaction detection and sparse selection

J Enouen, Y Liu - Advances in Neural Information …, 2022 - proceedings.neurips.cc
There is currently a large gap in performance between the statistically rigorous methods like
linear regression or additive splines and the powerful deep methods using neural networks …

Sensitivity analysis via the proportion of unmeasured confounding

M Bonvini, EH Kennedy - Journal of the American Statistical …, 2022 - Taylor & Francis
In observational studies, identification of average treatment effects (ATEs) is generally achieved by assuming that the
correct set of confounders has been measured and properly included in the relevant models …

Multi-fidelity Gaussian process bandit optimisation

K Kandasamy, G Dasarathy, J Oliva, J Schneider… - Journal of Artificial …, 2019 - jair.org
In many scientific and engineering applications, we are tasked with the maximisation of an
expensive-to-evaluate black-box function f. Traditional settings for this problem assume just …

Sparse modal additive model

H Chen, Y Wang, F Zheng, C Deng… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Sparse additive models have been successfully applied to high-dimensional data analysis
due to the flexibility and interpretability of their representation. However, the existing …

Exploring the linear subspace hypothesis in gender bias mitigation

F Vargas, R Cotterell - arXiv preprint arXiv:2009.09435, 2020 - arxiv.org
Bolukbasi et al. (2016) present one of the first gender bias mitigation techniques for word
embeddings. Their method takes pre-trained word embeddings as input and attempts to …

Generalization bounds for sparse random feature expansions

A Hashemi, H Schaeffer, R Shi, U Topcu, G Tran… - Applied and …, 2023 - Elsevier
Random feature methods have been successful in various machine learning tasks, are easy
to compute, and come with theoretical accuracy bounds. They serve as an alternative …

SHRIMP: Sparser random feature models via iterative magnitude pruning

Y Xie, R Shi, H Schaeffer… - … and Scientific Machine …, 2022 - proceedings.mlr.press
Sparse shrunk additive models and sparse random feature models have been developed
separately as methods to learn low-order functions, where there are few interactions …