Estimating the rate-distortion function by Wasserstein gradient descent
In the theory of lossy compression, the rate-distortion (RD) function $R(D)$ describes how
much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion) …
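For context, the rate-distortion function referenced in this entry has the classical Shannon definition (for a source $X \sim p_X$ and a distortion measure $d$); this is the standard textbook formulation, not a statement about the particular estimator proposed in the paper:

$$
R(D) = \min_{p_{\hat{X}\mid X}\,:\;\mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})
$$

That is, the minimum mutual information over all test channels meeting the average-distortion budget $D$.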
Learned Wyner–Ziv compressors recover binning
We consider lossy compression of an information source when the decoder has lossless
access to a correlated one. This setup, also known as the Wyner–Ziv problem, is a special …
Fundamental limitation of semantic communications: Neural estimation for rate-distortion
This paper studies the fundamental limit of semantic communications over the discrete
memoryless channel. We consider the scenario of sending a semantic source consisting of an …
Information theoretic clustering for coarse-grained modeling of non-equilibrium gas dynamics
We present a new framework for learning coarse-grained models
based on the maximum entropy principle. We show that existing methods for assigning …
Rate distortion via constrained estimated mutual information minimization
This paper proposes a novel methodology for the estimation of the rate distortion function
(RDF) in both continuous and discrete reconstruction spaces. The approach is input-space …
Fundamental limits of two-layer autoencoders, and achieving them with gradient methods
Autoencoders are a popular model in many branches of machine learning and lossy data
compression. However, their fundamental limits, the performance of gradient methods and …
Data-dependent generalization bounds via variable-size compressibility
M Sefidgaran, A Zaidi - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
In this paper, we establish novel data-dependent upper bounds on the generalization error
through the lens of a “variable-size compressibility” framework that we introduce here …
Fundamental limits of prompt compression: A rate-distortion framework for black-box language models
We formalize the problem of prompt compression for large language models (LLMs) and
present a framework to unify token-level prompt compression methods which create hard …
Channel Simulation: Theory and Applications to Lossy Compression and Differential Privacy
CT Li - Foundations and Trends® in Communications and …, 2024 - nowpublishers.com
One-shot channel simulation (or channel synthesis) has seen increasing applications in
lossy compression, differential privacy and machine learning. In this setting, an encoder …
BTSC: Binary tree structure convolution layers for building interpretable decision‐making deep CNN
Although deep convolutional neural networks (DCNNs) have achieved great success in the computer
vision field, such models are considered to lack interpretability in decision-making. One of …