Private empirical risk minimization: Efficient algorithms and tight error bounds
Convex empirical risk minimization is a basic tool in machine learning and statistics. We
provide new algorithms and matching lower bounds for differentially private convex …
Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices
S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
Abstract We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
An introduction to MCMC for machine learning
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo
method with emphasis on probabilistic machine learning. Second, it reviews the main …
[PDF][PDF] Random walks on graphs
L Lovász - Combinatorics, Paul erdos is eighty, 1993 - cs.yale.edu
Various aspects of the theory of random walks on graphs are surveyed. In particular,
estimates on the important parameters of access time, commute time, cover time and mixing …
[BOOK][B] Geometric algorithms and combinatorial optimization
Historically, there is a close connection between geometry and optimization. This is
illustrated by methods like the gradient method and the simplex method, which are …
Lower bounds for covering times for reversible Markov chains and random walks on graphs
DJ Aldous - Journal of Theoretical Probability, 1989 - Springer
For simple random walk on an N-vertex graph, the mean time to cover all vertices is at least
cN log(N), where c > 0 is an absolute constant. This is deduced from a more general result …
[PDF][PDF] Bayesian Inverse Reinforcement Learning.
D Ramachandran, E Amir - IJCAI, 2007 - academia.edu
Abstract Inverse Reinforcement Learning (IRL) is the problem of learning the reward function
underlying a Markov Decision Process given the dynamics of the system and the behaviour …
[BOOK][B] Randomized algorithms for analysis and control of uncertain systems: with applications
The presence of uncertainty in a system description has always been a critical issue in
control. The main objective of Randomized Algorithms for Analysis and Control of Uncertain …
Privacy for free: Posterior sampling and stochastic gradient Monte Carlo
We consider the problem of Bayesian learning on sensitive datasets and present two simple
but somewhat surprising results that connect Bayesian learning to “differential privacy”, a …
[PDF][PDF] Counting linear extensions is #P-complete
G Brightwell, P Winkler - Proceedings of the twenty-third annual ACM …, 1991 - dl.acm.org
We show that the problem of counting the number of linear extensions of a given partially
ordered set is #P-complete. This settles a long-standing open question and contrasts with …