Recent advances on neural network pruning at initialization
Neural network pruning typically removes connections or neurons from a pretrained
converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to …
Accelerating sparse deep neural networks
As neural network model sizes have dramatically increased, so has the interest in various
techniques to reduce their parameter counts and accelerate their execution. An active area …
A survey of FPGA-based neural network accelerator
Recent research on neural networks has shown significant advantages in machine
learning over traditional algorithms based on handcrafted features and models. Neural …
ThiNet: Pruning CNN filters for a thinner net
This paper aims at accelerating and compressing deep neural networks to deploy CNN
models into small devices like mobile phones or embedded gadgets. We focus on filter level …
Pruning filter in filter
Pruning has become a very powerful and effective technique to compress and accelerate
modern neural networks. Existing pruning methods can be grouped into two categories: filter …
Advancing model pruning via bi-level optimization
The deployment constraints in practical applications necessitate the pruning of large-scale
deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket …
Pruning Networks With Cross-Layer Ranking & k-Reciprocal Nearest Filters
This article focuses on filter-level network pruning. A novel pruning method, termed CLR-
RNF, is proposed. We first reveal a “long-tail” pruning problem in magnitude-based weight …
14.3 A 65nm computing-in-memory-based CNN processor with 2.9-to-35.8 TOPS/W system energy efficiency using dynamic-sparsity performance-scaling architecture …
Computing-in-Memory (CIM) is a promising solution for energy-efficient neural network (NN)
processors. Previous CIM chips [1],[4] mainly focus on the memory macro itself, lacking …
Pruning the pilots: Deep learning-based pilot design and channel estimation for MIMO-OFDM systems
MB Mashhadi, D Gündüz - IEEE Transactions on Wireless …, 2021 - ieeexplore.ieee.org
With the large number of antennas and subcarriers the overhead due to pilot transmission
for channel estimation can be prohibitive in wideband massive multiple-input multiple-output …
Channel permutations for N:M sparsity
We introduce channel permutations as a method to maximize the accuracy of N:M sparse
networks. N:M sparsity requires N out of M consecutive elements to be zero and has been …
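The N:M constraint in the snippet above can be illustrated with a minimal NumPy sketch: in each group of M consecutive weights, all but N entries are zeroed. Magnitude-based selection is one common pruning criterion but is an assumption here; the function name `prune_n_m` is illustrative, not from the cited paper.

```python
import numpy as np

def prune_n_m(weights, n=2, m=4):
    """Enforce N:M sparsity along the last axis: in each group of m
    consecutive elements, zero out all but the n largest magnitudes.
    Assumes the total number of elements is divisible by m."""
    w = np.array(weights, dtype=float)       # copy so the input is untouched
    flat = w.reshape(-1, m)                  # one row per group of m elements
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(w.shape)

w = np.array([[0.5, -1.2, 0.1, 2.0],
              [0.3, -0.2, 0.9, -0.1]])
print(prune_n_m(w))   # each group of 4 keeps only its 2 largest magnitudes
```

For 2:4 sparsity (the pattern accelerated by recent sparse tensor hardware), every group of four consecutive weights retains at most two nonzeros, which is what makes the layout amenable to fixed-rate hardware decompression.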