Estimating the rate-distortion function by Wasserstein gradient descent

Y Yang, S Eckstein, M Nutz… - Advances in Neural …, 2024 - proceedings.neurips.cc
In the theory of lossy compression, the rate-distortion (RD) function $R(D)$ describes how
much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion) …
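For reference, the standard (Shannon) definition of the rate-distortion function that this line of work estimates is the following minimization of mutual information over test channels meeting the distortion budget:

```latex
R(D) \;=\; \min_{\substack{P_{\hat{X}\mid X}:\; \mathbb{E}\,[d(X,\hat{X})] \,\le\, D}} I(X;\hat{X})
```

Here $X \sim P_X$ is the source, $\hat{X}$ the reconstruction, and $d(\cdot,\cdot)$ the distortion measure; the paper's contribution concerns how to estimate this quantity, not the definition itself.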

Learned Wyner–Ziv compressors recover binning

E Özyılkan, J Ballé, E Erkip - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
We consider lossy compression of an information source when the decoder has lossless
access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special …
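As background for this entry, the Wyner–Ziv rate-distortion function for source $X$ with decoder side information $Y$ is classically given by (a standard result, not specific to this paper):

```latex
R_{\mathrm{WZ}}(D) \;=\; \min_{\substack{P_{U\mid X},\; g:\\ U - X - Y,\;\; \mathbb{E}\,[d(X,\, g(U,Y))] \,\le\, D}} \bigl( I(X;U) - I(Y;U) \bigr)
```

where $U$ is an auxiliary random variable forming the Markov chain $U - X - Y$ and $g$ is the decoder's reconstruction function.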

Fundamental limitation of semantic communications: Neural estimation for rate-distortion

D Li, J Huang, C Huang, X Qin… - Journal of …, 2023 - ieeexplore.ieee.org
This paper studies the fundamental limits of semantic communication over the discrete
memoryless channel. We consider the scenario of sending a semantic source consisting of an …

Information theoretic clustering for coarse-grained modeling of non-equilibrium gas dynamics

C Jacobsen, I Zanardi, S Bhola, K Duraisamy… - Journal of …, 2024 - Elsevier
We present a new framework for learning coarse-grained models based on the maximum
entropy principle. We show that existing methods for assigning …

Rate distortion via constrained estimated mutual information minimization

D Tsur, B Huleihel, H Permuter - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
This paper proposes a novel methodology for estimating the rate-distortion function
(RDF) in both continuous and discrete reconstruction spaces. The approach is input-space …

Fundamental limits of two-layer autoencoders, and achieving them with gradient methods

A Shevchenko, K Kögler, H Hassani… - … on Machine Learning, 2023 - proceedings.mlr.press
Autoencoders are a popular model in many branches of machine learning and lossy data
compression. However, their fundamental limits, the performance of gradient methods and …

Data-dependent generalization bounds via variable-size compressibility

M Sefidgaran, A Zaidi - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
In this paper, we establish novel data-dependent upper bounds on the generalization error
through the lens of a “variable-size compressibility” framework that we introduce here …

Fundamental limits of prompt compression: A rate-distortion framework for black-box language models

A Girish, A Nagle, M Bondaschi, M Gastpar… - arXiv preprint arXiv …, 2024 - arxiv.org
We formalize the problem of prompt compression for large language models (LLMs) and
present a framework to unify token-level prompt compression methods which create hard …

Channel Simulation: Theory and Applications to Lossy Compression and Differential Privacy

CT Li - Foundations and Trends® in Communications and …, 2024 - nowpublishers.com
One-shot channel simulation (or channel synthesis) has seen increasing applications in
lossy compression, differential privacy and machine learning. In this setting, an encoder …

BTSC: Binary tree structure convolution layers for building interpretable decision‐making deep CNN

Y Wang, D Dai, D Liu, S Xia… - CAAI Transactions on …, 2024 - Wiley Online Library
Although deep convolutional neural networks (DCNNs) have achieved great success in the field
of computer vision, such models are considered to lack interpretability in decision‐making. One of …