ALigN: A Highly Accurate Adaptive Layerwise Log_2_Lead Quantization of Pre-Trained Neural Networks

S Gupta, S Ullah, K Ahuja, A Tiwari, A Kumar - IEEE Access, 2020 - ieeexplore.ieee.org
Deep neural networks are among the machine learning techniques increasingly
used in a variety of applications. However, the significantly high memory and computation …
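
The snippet breaks off before the method itself. Purely as a generic illustration of the power-of-two (log2) quantization family the title points to, and not the paper's adaptive layerwise Log_2_Lead scheme, a minimal NumPy sketch of post-training log2 weight quantization might look like this:

```python
import numpy as np

def log2_quantize(w, eps=1e-12):
    """Round each weight to the nearest signed power of two.

    Generic log2 quantizer for illustration only; the paper's adaptive
    layerwise Log_2_Lead scheme also keeps leading bits and chooses
    per-layer parameters, which this sketch does not attempt.
    """
    sign = np.sign(w)
    mag = np.abs(w) + eps            # avoid log2(0)
    exp = np.round(np.log2(mag))     # nearest power-of-two exponent
    return sign * np.power(2.0, exp)

# Example: quantize a pre-trained layer's weights without retraining.
weights = np.random.randn(4, 4).astype(np.float32)
q_weights = log2_quantize(weights)
```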

Monte Carlo Gradient Quantization

G Mordido, M Van Keirsbilck… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We propose Monte Carlo methods to leverage both sparsity and quantization to
compress gradients of neural networks throughout training. On top of reducing the …
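
As a rough illustration of the general idea of Monte Carlo (stochastic) gradient compression, and not the authors' specific estimator, unbiased stochastic rounding of a gradient onto a small set of levels can be sketched as:

```python
import numpy as np

def stochastic_round_to_levels(g, num_levels=16):
    """Unbiased stochastic rounding of gradients onto a uniform grid.

    Generic Monte-Carlo-style quantization sketch: each value is rounded
    up or down at random so that the result equals the input in expectation.
    """
    scale = np.max(np.abs(g)) + 1e-12
    x = g / scale * (num_levels - 1)          # map into [-(L-1), L-1]
    low = np.floor(x)
    prob_up = x - low                         # P(round up) keeps E[q] = x
    q = low + (np.random.rand(*g.shape) < prob_up)
    return q * scale / (num_levels - 1)

grad = np.random.randn(1000).astype(np.float32)
q_grad = stochastic_round_to_levels(grad)
# Quantizer is unbiased per element, so the means stay close.
assert np.allclose(q_grad.mean(), grad.mean(), atol=0.1)
```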

Fast and Accurate Output Error Estimation for Memristor-Based Deep Neural Networks

J Kern, S Henwood, G Mordido… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Memristors allow computing in memory, which may be leveraged by deep neural network
(DNN) accelerators to reduce their energy footprint. However, such gains in energy efficiency …

Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges

K Chitsaz, G Mordido, JP David… - … on Machine Learning …, 2023 - openreview.net
Promoting robustness in deep neural networks (DNNs) is crucial for their reliable
deployment in uncertain environments, such as low-power settings or in the presence of …
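
For context only, a common way to make quantization ranges learnable is a PACT-style clipping parameter trained with a straight-through estimator. The sketch below shows that generic idea, not the paper's bit-flip-aware training recipe:

```python
import torch
import torch.nn as nn

class LearnedRangeQuantizer(nn.Module):
    """Uniform quantizer with a learnable clipping range (generic sketch)."""

    def __init__(self, num_bits=8, init_range=3.0):
        super().__init__()
        self.num_bits = num_bits
        self.alpha = nn.Parameter(torch.tensor(init_range))  # learned range

    def forward(self, x):
        levels = 2 ** self.num_bits - 1
        # Clip to [-alpha, alpha]; gradients reach alpha through the clipping.
        x_clipped = torch.max(torch.min(x, self.alpha), -self.alpha)
        scale = 2 * self.alpha / levels
        # Straight-through estimator: round in the forward pass,
        # pass gradients through unchanged in the backward pass.
        q = torch.round(x_clipped / scale) * scale
        return x_clipped + (q - x_clipped).detach()

quantizer = LearnedRangeQuantizer(num_bits=4)
y = quantizer(torch.randn(8))   # alpha receives gradients via y
```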

SAMSON: Sharpness-Aware Minimization Scaled by Outlier Normalization for Improving DNN Generalization and Robustness

G Mordido, S Henwood, S Chandar… - arXiv preprint arXiv …, 2022 - arxiv.org
Energy-efficient deep neural network (DNN) accelerators are prone to non-idealities that
degrade DNN performance at inference time. To mitigate such degradation, existing …
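
For background, plain sharpness-aware minimization (SAM) perturbs the weights along the gradient direction before computing the update. The sketch below shows only that generic two-pass step, without SAMSON's outlier-normalized scaling:

```python
import torch

def sam_step(model, loss_fn, data, target, optimizer, rho=0.05):
    """One generic sharpness-aware minimization (SAM) step (sketch only).

    Plain SAM: ascend to a nearby high-loss point, then descend using the
    gradient measured there. SAMSON's per-weight outlier-normalized scaling
    of the perturbation is intentionally not reproduced here.
    """
    # First pass: gradients at the current weights.
    loss_fn(model(data), target).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Climb to the nearby worst-case weights w + rho * g / ||g||.
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # Second pass: gradients at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()

    # Restore the original weights, then update with the new gradients.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```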

Design, Analysis, and Applications of Approximate Arithmetic Modules

S Ullah - 2022 - tud.qucosa.de
From the initial computing machines, Colossus of 1943 and ENIAC of 1945, to
modern high-performance data centers and the Internet of Things (IoT), four design goals, i.e., …