Multiplicative filter networks

R Fathony, AK Sahu, D Willmott… - … Conference on Learning …, 2020 - openreview.net
Although deep networks are typically used to approximate functions over high dimensional
inputs, recent work has increased interest in neural networks as function approximators for …
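For orientation, the core architectural idea is easy to sketch: rather than composing nonlinearities, each layer multiplies a linear transform of the hidden state elementwise by a sinusoidal filter of the raw input. The sketch below shows the Fourier-filter variant; layer sizes, initialization scales, and function names are illustrative choices, not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of a FourierNet-style
# multiplicative filter network: each layer multiplies a linear transform of
# the hidden state elementwise by a fresh sinusoidal filter of the raw input.
import numpy as np

rng = np.random.default_rng(0)

def init_mfn(d_in, d_hidden, d_out, n_layers, freq_scale=10.0):
    return {
        "filters": [(freq_scale * rng.standard_normal((d_hidden, d_in)),
                     rng.uniform(0, 2 * np.pi, d_hidden))
                    for _ in range(n_layers)],
        "linears": [(rng.standard_normal((d_hidden, d_hidden)) / np.sqrt(d_hidden),
                     np.zeros(d_hidden))
                    for _ in range(n_layers - 1)],
        "out": (rng.standard_normal((d_out, d_hidden)) / np.sqrt(d_hidden),
                np.zeros(d_out)),
    }

def mfn_forward(params, x):
    """x: (batch, d_in) -> (batch, d_out)."""
    W0, phi0 = params["filters"][0]
    z = np.sin(x @ W0.T + phi0)                      # first sinusoidal filter
    for (W, b), (Wf, phif) in zip(params["linears"], params["filters"][1:]):
        z = (z @ W.T + b) * np.sin(x @ Wf.T + phif)  # multiplicative update
    Wo, bo = params["out"]
    return z @ Wo.T + bo

params = init_mfn(d_in=2, d_hidden=64, d_out=3, n_layers=4)
y = mfn_forward(params, rng.uniform(-1, 1, size=(5, 2)))  # shape (5, 3)
```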

The random feature model for input-output maps between banach spaces

NH Nelsen, AM Stuart - SIAM Journal on Scientific Computing, 2021 - SIAM
Well known to the machine learning community, the random feature model is a parametric
approximation to kernel interpolation or regression methods. It is typically used to …
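As a point of reference, the scalar-output version of the construction amounts to drawing random Fourier features once and solving a ridge regression in the coefficients. The sketch below uses a Gaussian-kernel sampling law and toy data; `lengthscale` and `lam` are illustrative values, not anything taken from the paper.

```python
# Sketch of scalar random feature ridge regression: sample features once,
# then solve a finite-dimensional least-squares problem in the coefficients.
import numpy as np

rng = np.random.default_rng(1)

def rff(x, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.cos(x @ W.T + b)

# toy 1-D regression problem
x_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(2 * x_train[:, 0]) + 0.05 * rng.standard_normal(200)

m, lengthscale, lam = 300, 0.5, 1e-3
W = rng.standard_normal((m, 1)) / lengthscale      # frequencies ~ N(0, 1/ell^2)
b = rng.uniform(0, 2 * np.pi, m)                   # random phases

Phi = rff(x_train, W, b)                           # (n, m) feature matrix
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y_train)

x_test = np.linspace(-3, 3, 50)[:, None]
y_pred = rff(x_test, W, b) @ alpha                 # approximate kernel ridge fit
```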

Learning stabilizable nonlinear dynamics with contraction-based regularization

S Singh, SM Richards, V Sindhwani… - … Journal of Robotics …, 2021 - journals.sagepub.com
We propose a novel framework for learning stabilizable nonlinear dynamical systems for
continuous control tasks in robotics. The key contribution is a control-theoretic regularizer for …

Operator learning using random features: A tool for scientific computing

NH Nelsen, AM Stuart - SIAM Review, 2024 - SIAM
Supervised operator learning centers on the use of training data, in the form of input-output
pairs, to estimate maps between infinite-dimensional spaces. It is emerging as a powerful …
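A minimal way to picture this is random features that map a discretized input function to a discretized output function, with only the scalar combination coefficients trained. The feature map in the sketch below (a fixed random linear operator followed by a pointwise nonlinearity) is one plausible illustrative choice, not the construction used in the paper.

```python
# Sketch of random-feature operator learning on a fixed grid: each random
# feature maps a discretized input function u to a discretized output function,
# and only the combination coefficients are trained (ridge regression).
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_feat, n_train, lam = 64, 200, 100, 1e-4

# illustrative random feature: phi_j(u) = pointwise nonlinearity of a fixed
# random linear operator applied to u
Ws = rng.standard_normal((n_feat, n_grid, n_grid)) / np.sqrt(n_grid)
def feature(u):                       # u: (n_grid,) -> (n_feat, n_grid)
    return np.tanh(Ws @ u)

# toy data: the target operator maps u to its pointwise square
U = rng.standard_normal((n_train, n_grid))
V = U ** 2

# normal equations for the coefficients c in G(u) ~ sum_j c_j phi_j(u)
A = np.zeros((n_feat, n_feat))
b = np.zeros(n_feat)
for u, v in zip(U, V):
    Phi = feature(u)                  # (n_feat, n_grid)
    A += Phi @ Phi.T
    b += Phi @ v
c = np.linalg.solve(A + lam * np.eye(n_feat), b)

u_new = rng.standard_normal(n_grid)
v_pred = feature(u_new).T @ c         # predicted output function on the grid
```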

Error bounds for learning with vector-valued random features

S Lanthaler, NH Nelsen - Advances in Neural Information …, 2024 - proceedings.neurips.cc
This paper provides a comprehensive error analysis of learning with vector-valued random
features (RF). The theory is developed for RF ridge regression in a fully general infinite …

Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes

P Holderrieth, MJ Hutchinson… - … Conference on Machine …, 2021 - proceedings.mlr.press
Motivated by objects such as electric fields or fluid streams, we study the problem of learning
stochastic fields, i.e., stochastic processes whose samples are fields like those occurring in …

Dense-exponential random features: sharp positive estimators of the Gaussian kernel

V Likhosherstov, KM Choromanski… - Advances in …, 2024 - proceedings.neurips.cc
The problem of efficient approximation of a linear operator induced by the Gaussian or
softmax kernel is often addressed using random features (RFs) which yield an unbiased …
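For background, the classical positive random feature estimator that dense-exponential features refine rests on the identity exp(xᵀy) = E_{w∼N(0,I)}[exp(wᵀx − ‖x‖²/2) exp(wᵀy − ‖y‖²/2)]. The sketch below checks it numerically; it shows the standard baseline estimator, not the dense-exponential construction itself.

```python
# Numerical check of the classical positive random feature identity for the
# softmax/Gaussian kernel (the baseline that dense-exponential RFs sharpen):
#   exp(x.T y) = E_w[ exp(w.x - |x|^2/2) * exp(w.y - |y|^2/2) ],  w ~ N(0, I)
import numpy as np

rng = np.random.default_rng(3)
d, m = 8, 200_000

x = rng.standard_normal(d) * 0.3
y = rng.standard_normal(d) * 0.3
W = rng.standard_normal((m, d))                       # m Gaussian samples

phi_x = np.exp(W @ x - x @ x / 2.0)                   # positive features of x
phi_y = np.exp(W @ y - y @ y / 2.0)                   # positive features of y

estimate = np.mean(phi_x * phi_y)                     # unbiased MC estimator
exact = np.exp(x @ y)                                 # softmax kernel value
print(estimate, exact)                                # close for large m

# the Gaussian kernel follows by scaling out the norms:
gauss_est = np.exp(-(x @ x + y @ y) / 2.0) * estimate
gauss_exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
print(gauss_est, gauss_exact)
```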

Nonparametric adaptive control and prediction: Theory and randomized algorithms

NM Boffi, S Tu, JJE Slotine - Journal of Machine Learning Research, 2022 - jmlr.org
A key assumption in the theory of nonlinear adaptive control is that the uncertainty of the
system can be expressed in the linear span of a set of known basis functions. While this …
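The linear-span assumption can be illustrated with a linearly parameterized model f(x) ≈ âᵀφ(x) adapted online from observed state derivatives. The gradient (LMS-style) prediction law below is a generic sketch under that assumption, not the nonparametric or randomized algorithms developed in the paper.

```python
# Sketch of the "linear span of known basis functions" assumption: model the
# unknown dynamics as f(x) ~ a.T phi(x) and adapt a online from observed
# state derivatives with a plain gradient (LMS) prediction law.
import numpy as np

def phi(x):                                   # illustrative fixed basis
    return np.array([1.0, x, x ** 2, np.sin(x), np.cos(x)])

a_true = np.array([0.0, -1.0, 0.0, 0.5, 0.0]) # ground-truth coefficients
a_hat = np.zeros(5)
gamma, dt = 0.5, 0.01

x, t = 1.0, 0.0
for _ in range(100_000):
    u = np.sin(2 * t) + np.cos(3 * t)         # known exploratory input
    xdot = a_true @ phi(x) + u                # "measured" state derivative
    err = a_hat @ phi(x) + u - xdot           # prediction error
    a_hat -= gamma * err * phi(x) * dt        # gradient adaptation law
    x += xdot * dt                            # simulate the true system
    t += dt

print(a_hat)  # approaches a_true when the trajectory is sufficiently exciting
```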

Hyperparameter optimization for randomized algorithms: a case study for random features

ORA Dunbar, NH Nelsen, M Mutic - arXiv preprint arXiv:2407.00584, 2024 - arxiv.org
Randomized algorithms exploit stochasticity to reduce computational complexity. One
important example is random feature regression (RFR) that accelerates Gaussian process …
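As a naive point of comparison for what is being tuned, a validation-based grid search over the two most common random feature regression hyperparameters (kernel lengthscale and ridge penalty) looks like the following. The tuning strategy shown is the brute-force baseline, not the method studied in the paper.

```python
# Naive baseline for random feature regression hyperparameters: grid search
# over the kernel lengthscale and ridge penalty on a held-out validation set.
import numpy as np

rng = np.random.default_rng(5)

def fit_predict(x_tr, y_tr, x_va, m, ell, lam):
    W = rng.standard_normal((m, x_tr.shape[1])) / ell   # fresh features per candidate
    b = rng.uniform(0, 2 * np.pi, m)
    feats = lambda x: np.sqrt(2.0 / m) * np.cos(x @ W.T + b)
    Phi = feats(x_tr)
    alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y_tr)
    return feats(x_va) @ alpha

x = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.standard_normal(300)
x_tr, y_tr, x_va, y_va = x[:200], y[:200], x[200:], y[200:]

best = None
for ell in [0.1, 0.3, 1.0, 3.0]:
    for lam in [1e-6, 1e-4, 1e-2, 1.0]:
        err = np.mean((fit_predict(x_tr, y_tr, x_va, 200, ell, lam) - y_va) ** 2)
        if best is None or err < best[0]:
            best = (err, ell, lam)
print("validation MSE %.4f at lengthscale %.1f, ridge %.0e" % best)
```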

Learning contracting vector fields for stable imitation learning

V Sindhwani, S Tu, M Khansari - arXiv preprint arXiv:1804.04878, 2018 - arxiv.org
We propose a new non-parametric framework for learning incrementally stable dynamical
systems ẋ = f(x) from a set of sampled trajectories. We construct a rich family of smooth …
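For reference, the standard contraction condition underlying incremental stability is the following: a vector field f is contracting at rate λ > 0 in a metric M(x) ≻ 0 if, for all x,

```latex
% Contraction condition for \dot{x} = f(x); \dot{M}(x) denotes the derivative
% of the metric along trajectories.
\[
  \frac{\partial f}{\partial x}^{\!\top} M(x)
  + M(x)\,\frac{\partial f}{\partial x}
  + \dot{M}(x) \;\preceq\; -2\lambda\, M(x) .
\]
```

In that case all trajectories converge to one another exponentially at rate λ; constraints of this form are what learned families of contracting vector fields are designed to satisfy.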