Randomly pivoted Cholesky: Practical approximation of a kernel matrix with few entry evaluations

Y Chen, EN Epperly, JA Tropp… - … on Pure and Applied …, 2023 - Wiley Online Library
The randomly pivoted Cholesky algorithm (RPCholesky) computes a factorized rank-k
approximation of an N × N positive-semidefinite (psd) matrix. RPCholesky requires only …
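The snippet above describes the core routine. As a rough illustration of the idea (not the authors' implementation), a minimal sketch of randomly pivoted partial Cholesky can be written as follows; the function name and interface are hypothetical:

```python
import numpy as np

def rp_cholesky(A, k, seed=None):
    """Sketch of randomly pivoted partial Cholesky (RPCholesky).

    Returns F (N x k) with A ~= F @ F.T. Each step samples a pivot
    with probability proportional to the residual diagonal, so only
    k columns of A (plus its diagonal) need to be evaluated.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()   # diagonal of the residual matrix
    for i in range(k):
        s = rng.choice(n, p=d / d.sum())  # sample pivot ~ residual diagonal
        g = A[:, s] - F[:, :i] @ F[s, :i] # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])
        d -= F[:, i] ** 2                 # downdate the residual diagonal
        d = d.clip(min=0.0)               # guard against round-off
    return F
```

On an exactly rank-k psd matrix, k steps recover the matrix up to round-off, since each sampled pivot has a strictly positive residual diagonal.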

Estimating Koopman operators with sketching to provably learn large scale dynamical systems

G Meanti, A Chatalic, V Kostic… - Advances in …, 2024 - proceedings.neurips.cc
The theory of Koopman operators makes it possible to deploy non-parametric machine learning
algorithms to predict and analyze complex dynamical systems. Estimators such as principal …

Improved Bayesian regret bounds for Thompson sampling in reinforcement learning

A Moradipari, M Pedramfar… - Advances in …, 2023 - proceedings.neurips.cc
In this paper, we prove state-of-the-art Bayesian regret bounds for Thompson Sampling in
reinforcement learning in a multitude of settings. We present a refined analysis of the …

Have ASkotch: Fast Methods for Large-scale, Memory-constrained Kernel Ridge Regression

P Rathore, Z Frangella, M Udell - arXiv preprint arXiv:2407.10070, 2024 - arxiv.org
Kernel ridge regression (KRR) is a fundamental computational tool, appearing in problems
that range from computational chemistry to health analytics, with a particular interest due to …

Theoretical insights on the pre-image resolution in machine learning

P Honeine - Pattern Recognition, 2024 - Elsevier
While many nonlinear pattern recognition and data mining tasks rely on embedding the data
into a latent space, one often needs to extract the patterns in the input space. Estimating the …

Enhancing Kernel Flexibility via Learning Asymmetric Locally-Adaptive Kernels

F He, M He, L Shi, X Huang, JAK Suykens - arXiv preprint arXiv …, 2023 - arxiv.org
The lack of sufficient flexibility is the key bottleneck of kernel-based learning, which relies on
manually designed, pre-specified, and non-trainable kernels. To enhance kernel flexibility, this …

On the Nyström Approximation for Preconditioning in Kernel Machines

A Abedsoltan, P Pandit… - International …, 2024 - proceedings.mlr.press
Kernel methods are a popular class of nonlinear predictive models in machine learning.
Scalable algorithms for learning kernel models need to be iterative in nature, but …

GPS-Net: Discovering prognostic pathway modules based on network regularized kernel learning

S Yao, K Li, T Li, X Yu, PF Kuan, X Wang - The American Journal of Human …, 2024 - cell.com
The search for prognostic biomarkers capable of predicting patient outcomes by analyzing
gene expression in tissue samples and other molecular profiles remains largely focused on …

Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction

M Fornace, M Lindsey - arXiv preprint arXiv:2407.01698, 2024 - arxiv.org
Column selection is an essential tool for structure-preserving low-rank approximation, with
wide-ranging applications across many fields, such as data science, machine learning, and …

A theoretical design of concept sets: improving the predictability of concept bottleneck models

MR Luyten, M van der Schaar - The Thirty-eighth Annual …, 2024 - openreview.net
Concept-based learning, a promising approach in machine learning, emphasizes the value
of high-level representations called concepts. However, despite growing interest in concept …