Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-k
approximation of an N × N positive-semidefinite (psd) matrix. RPCholesky requires only …
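As a rough illustration of the entry above, here is a minimal numpy sketch of the randomly pivoted partial Cholesky idea: each pivot is sampled with probability proportional to the current residual diagonal, then eliminated. Function and variable names are my own, and this is a plain dense sketch, not the paper's entry-evaluation-efficient implementation.

```python
import numpy as np

def rp_cholesky(A, k, rng=None):
    """Rank-k factor F (so A ~ F @ F.T) of a psd matrix A via randomly
    pivoted partial Cholesky. A sketch of the idea only: each pivot is
    sampled with probability proportional to the residual diagonal."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(A), dtype=float)       # diagonal of the residual A - F F^T
    for i in range(k):
        p = rng.choice(n, p=d / d.sum())        # sample pivot proportional to residual diag
        g = A[:, p] - F[:, :i] @ F[p, :i]       # residual column at the pivot
        F[:, i] = g / np.sqrt(g[p])             # eliminate the pivot
        d = np.maximum(d - F[:, i] ** 2, 0.0)   # update residual diagonal, clip rounding
    return F
```

Each step touches only one column of A plus the diagonal, which is the source of the algorithm's low entry-evaluation cost.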
Estimating Koopman operators with sketching to provably learn large scale dynamical systems
The theory of Koopman operators makes it possible to deploy non-parametric machine learning
algorithms to predict and analyze complex dynamical systems. Estimators such as principal …
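For context on the Koopman estimation setting above, a minimal sketch of the generic extended DMD estimator (a standard baseline in this literature, not the paper's sketched large-scale variant; the dictionary `psi` is a placeholder of my choosing):

```python
import numpy as np

def edmd(X, Y, psi):
    """Extended DMD: least-squares estimate K of the Koopman operator's
    matrix representation on a feature dictionary psi, i.e. psi(X) @ K
    approximates psi(Y). Rows of X are states x_t; rows of Y are the
    successor states x_{t+1}."""
    K, *_ = np.linalg.lstsq(psi(X), psi(Y), rcond=None)
    return K
```

With a linear dictionary `psi(Z) = Z` and linear dynamics, K recovers the transition matrix (transposed, since states are stored as rows).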
Improved Bayesian regret bounds for Thompson sampling in reinforcement learning
A Moradipari, M Pedramfar… - Advances in …, 2023 - proceedings.neurips.cc
In this paper, we prove state-of-the-art Bayesian regret bounds for Thompson Sampling in
reinforcement learning in a multitude of settings. We present a refined analysis of the …
Have ASkotch: Fast Methods for Large-scale, Memory-constrained Kernel Ridge Regression
Kernel ridge regression (KRR) is a fundamental computational tool, appearing in problems
that range from computational chemistry to health analytics, with a particular interest due to …
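For reference on the KRR problem above, a minimal dense sketch with a Gaussian kernel. This is the O(n^3) direct solve that motivates the paper's fast, memory-constrained methods, not the paper's algorithm; names and default parameters are my own.

```python
import numpy as np

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Kernel ridge regression with an RBF kernel: solve the linear
    system (K + lam * n * I) alpha = y by a direct dense factorization.
    A baseline sketch; this O(n^3) solve is exactly what does not scale."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-gamma * sq)                               # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha, K

def krr_predict(X_train, X_test, alpha, gamma=1.0):
    """Evaluate the fitted model at new points via cross-kernel evaluations."""
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ alpha
```

Both the O(n^2) kernel matrix and the direct solve become bottlenecks at large n, which is the regime the entry above targets.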
Theoretical insights on the pre-image resolution in machine learning
P Honeine - Pattern Recognition, 2024 - Elsevier
While many nonlinear pattern recognition and data mining tasks rely on embedding the data
into a latent space, one often needs to extract the patterns in the input space. Estimating the …
Enhancing Kernel Flexibility via Learning Asymmetric Locally-Adaptive Kernels
The lack of sufficient flexibility is the key bottleneck of kernel-based learning that relies on
manually designed, pre-given, and non-trainable kernels. To enhance kernel flexibility, this …
On the Nyström Approximation for Preconditioning in Kernel Machines
A Abedsoltan, P Pandit… - International …, 2024 - proceedings.mlr.press
Kernel methods are a popular class of nonlinear predictive models in machine learning.
Scalable algorithms for learning kernel models need to be iterative in nature, but …
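To make the Nyström-preconditioning entry above concrete, a sketch of the standard stabilized randomized Nyström approximation of a psd matrix (the generic sketch-and-factor recipe; this illustrates the approximation itself, not the paper's specific preconditioner analysis):

```python
import numpy as np

def nystrom_approx(A, rank, rng=None):
    """Randomized Nystrom approximation A ~ U @ diag(lam) @ U.T of a psd
    matrix, with a tiny diagonal shift for numerical stability. A generic
    illustrative sketch, not the paper's exact construction."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    Omega, _ = np.linalg.qr(rng.standard_normal((n, rank)))    # orthonormal test matrix
    Y = A @ Omega
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)  # stabilizing shift
    Y = Y + nu * Omega
    M = Omega.T @ Y
    C = np.linalg.cholesky((M + M.T) / 2)     # factor the symmetrized core matrix
    B = np.linalg.solve(C, Y.T).T             # B = Y C^{-T}, so B B^T = Y M^{-1} Y^T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(s ** 2 - nu, 0.0)        # remove the shift from the eigenvalues
    return U, lam
```

In the preconditioning setting, `U` and `lam` can be assembled into a cheap approximate inverse of the regularized kernel matrix and applied inside conjugate gradients.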
GPS-Net: Discovering prognostic pathway modules based on network regularized kernel learning
The search for prognostic biomarkers capable of predicting patient outcomes, by analyzing
gene expression in tissue samples and other molecular profiles, remains largely focused on …
Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction
Column selection is an essential tool for structure-preserving low-rank approximation, with
wide-ranging applications across many fields, such as data science, machine learning, and …
A theoretical design of concept sets: improving the predictability of concept bottleneck models
MR Luyten, M van der Schaar - The Thirty-eighth Annual …, 2024 - openreview.net
Concept-based learning, a promising approach in machine learning, emphasizes the value
of high-level representations called concepts. However, despite growing interest in concept …