An information-theoretic perspective on variance-invariance-covariance regularization
R Shwartz-Ziv, R Balestriero… - Advances in …, 2023 - proceedings.neurips.cc
Variance-Invariance-Covariance Regularization (VICReg) is a self-supervised
learning (SSL) method that has shown promising results on a variety of tasks. However, the …
martFL: Enabling Utility-Driven Data Marketplace with a Robust and Verifiable Federated Learning Architecture
The development of machine learning models requires a large amount of training data. Data
marketplace is a critical platform to trade high-quality and private-domain data that is not …
Fine-grained data distribution alignment for post-training quantization
While post-training quantization receives popularity mostly due to its evasion in accessing
the original complete training dataset, its poor performance also stems from scarce images …
VQ4DiT: Efficient post-training vector quantization for diffusion transformers
The Diffusion Transformers Models (DiTs) have transitioned the network architecture from
traditional UNets to transformers, demonstrating exceptional capabilities in image …
Sub-8-bit quantization for on-device speech recognition: A regularization-free approach
For on-device automatic speech recognition (ASR), quantization aware training (QAT) is
ubiquitous to achieve the trade-off between model predictive performance and efficiency …
YONO: Modeling multiple heterogeneous neural networks on microcontrollers
Internet of Things (IoT) systems provide large amounts of data on all aspects of human
behavior. Machine learning techniques, especially deep neural networks (DNN), have …
Enabling on-device smartphone GPU based training: Lessons learned
Deep Learning (DL) has shown impressive performance in many mobile applications. Most
existing works have focused on reducing the computational and resource overheads of …
A noise-driven heterogeneous stochastic computing multiplier for heuristic precision improvement in energy-efficient DNNs
J Wang, H Chen, D Wang, K Mei… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Stochastic computing (SC) has become a promising approximate computing solution by its
negligible resource occupancy and ultralow energy consumption. As a potential …
GPTVQ: The blessing of dimensionality for LLM quantization
In this work we show that the size versus accuracy trade-off of neural network quantization
can be significantly improved by increasing the quantization dimensionality. We propose the …