A tutorial on kernel density estimation and recent advances

YC Chen - Biostatistics & Epidemiology, 2017 - Taylor & Francis
This tutorial provides a gentle introduction to kernel density estimation (KDE) and recent
advances regarding confidence bands and geometric/topological features. We begin with a …

Gaussian processes and kernel methods: A review on connections and equivalences

M Kanagawa, P Hennig, D Sejdinovic… - arXiv preprint arXiv …, 2018 - arxiv.org
This paper is an attempt to bridge the conceptual gaps between researchers working on the
two widely used approaches based on positive definite kernels: Bayesian learning or …

CutPaste: Self-supervised learning for anomaly detection and localization

CL Li, K Sohn, J Yoon, T Pfister - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We aim at constructing a high performance model for defect detection that detects unknown
anomalous patterns of an image without anomalous data. To this end, we propose a two …

Deep learning: a statistical viewpoint

PL Bartlett, A Montanari, A Rakhlin - Acta numerica, 2021 - cambridge.org
The remarkable practical success of deep learning has revealed some major surprises from
a theoretical perspective. In particular, simple gradient methods easily find near-optimal …

Distribution matching for crowd counting

B Wang, H Liu, D Samaras… - Advances in neural …, 2020 - proceedings.neurips.cc
In crowd counting, each training image contains multiple people, where each person is
annotated by a dot. Existing crowd counting methods need to use a Gaussian to smooth …

Towards optimal doubly robust estimation of heterogeneous causal effects

EH Kennedy - Electronic Journal of Statistics, 2023 - projecteuclid.org
Heterogeneous effect estimation is crucial in causal inference, with applications across
medicine and social science. Many methods for estimating conditional average treatment …

A theoretical analysis of deep Q-learning

J Fan, Z Wang, Y Xie, Z Yang - Learning for dynamics and …, 2020 - proceedings.mlr.press
Despite the great empirical success of deep reinforcement learning, its theoretical
foundation is less well understood. In this work, we make the first attempt to theoretically …

Learning and evaluating representations for deep one-class classification

K Sohn, CL Li, J Yoon, M Jin, T Pfister - arXiv preprint arXiv:2011.02578, 2020 - arxiv.org
We present a two-stage framework for deep one-class classification. We first learn self-
supervised representations from one-class data, and then build one-class classifiers on …

[BOOK][B] Bandit algorithms

T Lattimore, C Szepesvári - 2020 - books.google.com
Decision-making in the face of uncertainty is a significant challenge in machine learning,
and the multi-armed bandit model is a commonly used framework to address it. This …

FLAMBE: Structural complexity and representation learning of low rank MDPs

A Agarwal, S Kakade… - Advances in neural …, 2020 - proceedings.neurips.cc
In order to deal with the curse of dimensionality in reinforcement learning (RL), it is common
practice to make parametric assumptions where values or policies are functions of some low …