A review of Shannon and differential entropy rate estimation
A Feutrill, M Roughan - Entropy, 2021 - mdpi.com
In this paper, we present a review of Shannon and differential entropy rate estimation
techniques. Entropy rate, which measures the average information gain from a stochastic …
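For a stationary first-order Markov source, this entropy rate reduces to $H = -\sum_i \pi_i \sum_j P_{ij}\log_2 P_{ij}$, where $\pi$ is the stationary distribution of the transition matrix $P$. A minimal NumPy sketch of that plug-in computation (the function name and the two-state example are illustrative, not taken from the review):

import numpy as np

def markov_entropy_rate(P):
    """Shannon entropy rate (bits/symbol) of a stationary first-order
    Markov chain with row-stochastic transition matrix P."""
    P = np.asarray(P, dtype=float)
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    # H = -sum_i pi_i sum_j P_ij log2 P_ij, with the 0 log 0 = 0 convention.
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

# Two-state example: the rate lies between 0 (deterministic) and 1 bit/symbol.
print(markov_entropy_rate([[0.9, 0.1], [0.2, 0.8]]))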
Empirical estimation of information measures: A literature guide
S Verdú - Entropy, 2019 - mdpi.com
We give a brief survey of the literature on the empirical estimation of entropy, differential
entropy, relative entropy, mutual information and related information measures. While those …
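A common baseline among these techniques is the plug-in (empirical-frequency) estimator of Shannon entropy, which simply substitutes observed frequencies for the unknown probabilities. A minimal sketch, with illustrative names only:

from collections import Counter
import math

def plugin_entropy(samples):
    """Plug-in (maximum-likelihood) estimate of Shannon entropy in bits:
    replace the unknown distribution by the empirical frequencies."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fair coin observed ten times; the estimate fluctuates around 1 bit.
print(plugin_entropy("HTHHTTHTHH"))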
Mixing time estimation in ergodic markov chains from a single trajectory with contraction methods
G Wolfer - Algorithmic Learning Theory, 2020 - proceedings.mlr.press
The mixing time $t_{\mathsf{mix}}$ of an ergodic Markov chain measures the rate
of convergence towards its stationary distribution $\boldsymbol{\pi}$. We consider the …
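When the transition matrix $P$ is known, the definition $t_{\mathsf{mix}}(\varepsilon) = \min\{t : \max_x \|P^t(x,\cdot) - \boldsymbol{\pi}\|_{\mathrm{TV}} \le \varepsilon\}$ can be evaluated by brute force, which is the opposite setting to the single-trajectory estimation studied in the paper; the sketch below only illustrates the definition (names and the conventional $\varepsilon = 1/4$ are assumptions):

import numpy as np

def mixing_time(P, eps=0.25, t_max=10_000):
    """Smallest t such that max_x TV(P^t(x, .), pi) <= eps, computed
    directly from a known transition matrix, not from a trajectory."""
    P = np.asarray(P, dtype=float)
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    Pt = np.eye(len(P))
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()  # worst-case start state
        if tv <= eps:
            return t
    return None

print(mixing_time([[0.9, 0.1], [0.2, 0.8]]))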
Entropy rate estimation for Markov chains with large state space
Entropy estimation is one of the prototypical problems in distribution property testing. To
consistently estimate the Shannon entropy of a distribution on $S$ elements with …
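A natural baseline for a single observed trajectory is the plug-in estimate of the conditional entropy $H(X_{t+1} \mid X_t)$ built from empirical transition counts; the sketch below shows that baseline only (names illustrative), not the large-state-space estimators analysed in the paper:

from collections import Counter
import math

def empirical_entropy_rate(trajectory):
    """Plug-in entropy-rate estimate for a first-order Markov chain from one
    trajectory: empirical conditional entropy in bits per symbol."""
    n = len(trajectory) - 1
    pair_counts = Counter(zip(trajectory[:-1], trajectory[1:]))
    state_counts = Counter(trajectory[:-1])
    h = 0.0
    for (i, j), c in pair_counts.items():
        p_ij = c / n                       # empirical joint frequency of (i, j)
        p_j_given_i = c / state_counts[i]  # empirical transition probability
        h -= p_ij * math.log2(p_j_given_i)
    return h

print(empirical_entropy_rate("aabababbbaabab"))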
Optimal prediction of Markov chains with and without spectral gap
We study the following learning problem with dependent data: Given a trajectory of length
$n$ from a stationary Markov chain with $k$ states, the goal is to predict the distribution of …
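A simple baseline for this task is add-constant smoothing of the empirical transition counts out of the last observed state; the sketch below shows only that baseline (function name and smoothing constant are assumptions), not the minimax-optimal procedures of the paper:

from collections import Counter
import numpy as np

def predict_next_distribution(trajectory, k, alpha=0.5):
    """Add-constant (Laplace-style) estimate of the distribution of the next
    state, given the last observed state of a trajectory over states 0..k-1."""
    last = trajectory[-1]
    counts = Counter(nxt for cur, nxt in zip(trajectory[:-1], trajectory[1:])
                     if cur == last)
    probs = np.array([counts.get(s, 0) + alpha for s in range(k)], dtype=float)
    return probs / probs.sum()

print(predict_next_distribution([0, 1, 1, 0, 1, 2, 1, 1], k=3))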
Evaluation and monitoring of free running oscillators serving as source of randomness
In this paper, we evaluate clock signals generated in ring oscillators and self-timed rings and
the way their jitter can be transformed into random numbers. We show that counting the …
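As a toy illustration of the general counting idea only, and not the evaluation methodology of the paper, the sketch below turns hypothetical jitter-affected period counts into raw bits via the parity of each count; all names and numbers are made up, and real generators need dedicated post-processing and health tests:

def bits_from_jittery_counts(counts):
    """Extract one raw bit per sampling interval as the parity (least
    significant bit) of a jitter-affected oscillator period count."""
    return [c & 1 for c in counts]

# Hypothetical counter readings of a free-running oscillator sampled by a
# slower reference clock; only the jitter-driven variation carries entropy.
print(bits_from_jittery_counts([1043, 1041, 1042, 1040, 1043, 1042]))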
Low complexity estimation method of Rényi entropy for ergodic sources
YS Kim - Entropy, 2018 - mdpi.com
Since entropy is a popular randomness measure, many studies address the
estimation of entropies from given random samples. In this paper, we propose an estimation …
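For order $\alpha \ne 1$, the Rényi entropy is $H_\alpha = \frac{1}{1-\alpha}\log_2 \sum_i p_i^\alpha$, and the naive baseline substitutes empirical frequencies for the $p_i$; a minimal sketch of that plug-in baseline (not the low-complexity method proposed in the paper):

from collections import Counter
import math

def plugin_renyi_entropy(samples, alpha=2.0):
    """Plug-in Renyi entropy estimate of order alpha != 1, in bits:
    log2(sum_i p_i^alpha) / (1 - alpha) with empirical frequencies p_i."""
    n = len(samples)
    power_sum = sum((c / n) ** alpha for c in Counter(samples).values())
    return math.log2(power_sum) / (1.0 - alpha)

# Collision (order-2) entropy of a biased four-letter source.
print(plugin_renyi_entropy("aaabbbccdaabbacd", alpha=2.0))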
Estimating the fundamental limits is easier than achieving the fundamental limits
We show through case studies that it is easier to estimate the fundamental limits of data
processing than to construct the explicit algorithms to achieve those limits. Focusing on …
Return-time-spectrum for equilibrium states with potentials of summable variation
M Abadi, V Amorim, JR Chazottes… - Ergodic Theory and …, 2023 - cambridge.org
Let $(X_n)_{n\ge 0}$ be a stationary and ergodic process with joint distribution $\mu$, where the random variables
take values in a finite set. Let $R_n$ be the first time this process repeats its first $n$ symbols of …
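The return time $R_n$ can be read off an observed sequence directly, and by the Ornstein-Weiss theorem $\frac{1}{n}\log_2 R_n$ converges to the entropy rate for ergodic sources, which is what links return-time spectra to entropy estimation; a minimal sketch (function name and example string are illustrative):

def first_return_time(x, n):
    """First index t >= 1 at which the initial block x[0:n] reappears as
    x[t:t+n]; returns None if it does not recur in the observed sequence."""
    block = x[:n]
    for t in range(1, len(x) - n + 1):
        if x[t:t + n] == block:
            return t
    return None

# (1/n) * log2(R_n) approximates the entropy rate for long ergodic sequences.
print(first_return_time("abracadabraabracadabra", n=4))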
Optimal prediction of Markov chains with and without spectral gap
We study the following learning problem with dependent data: Observing a trajectory of
length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For …