Survey: Exploiting data redundancy for optimization of deep learning

JA Chen, W Niu, B Ren, Y Wang, X Shen - ACM Computing Surveys, 2023 - dl.acm.org
Data redundancy is ubiquitous in the inputs and intermediate results of Deep Neural
Networks (DNN). It offers many significant opportunities for improving DNN performance and …

Language model compression with weighted low-rank factorization

YC Hsu, T Hua, S Chang, Q Lou, Y Shen… - arXiv preprint arXiv …, 2022 - arxiv.org
Factorizing a large matrix into small matrices is a popular strategy for model compression.
Singular value decomposition (SVD) plays a vital role in this compression strategy …
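As a minimal sketch of the factorization this snippet describes: truncated SVD splits one large weight matrix into two smaller factors. The matrix shapes and target rank below are illustrative assumptions, not values from the paper.

    import numpy as np

    # Illustrative shapes and rank; the paper's layers may differ.
    n, d, r = 768, 3072, 64
    W = np.random.randn(n, d)            # original weight matrix

    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]                 # n x r factor (singular values folded in)
    B = Vt[:r, :]                        # r x d factor

    # W is approximated by A @ B with far fewer parameters:
    print(W.size, A.size + B.size)       # 2359296 vs 245760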

A unified collaborative representation learning for neural-network based recommender systems

Y Xu, E Wang, Y Yang, Y Chang - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
With the rise of neural networks, recommendation methods have improved significantly thanks to their powerful prediction and inference capabilities. Existing neural-network based …

A sentence is known by the company it keeps: Improving Legal Document Summarization Using Deep Clustering

D Jain, MD Borah, A Biswas - Artificial Intelligence and Law, 2024 - Springer
Understanding and rapidly processing lengthy legal documents are computationally challenging problems. Designing efficient automatic summarization …
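The snippet only hints at the method; a generic extractive summarizer built on sentence clustering conveys the idea: embed sentences, cluster them, and keep the sentence nearest each centroid. The embedding model and cluster count here are assumptions, not details from the paper.

    import numpy as np
    from sklearn.cluster import KMeans
    from sentence_transformers import SentenceTransformer

    def cluster_summarize(sentences, k=5):
        # Embed each sentence, group similar ones, and keep the
        # sentence closest to each cluster centroid as the summary.
        model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
        emb = model.encode(sentences)
        km = KMeans(n_clusters=k, n_init=10).fit(emb)
        picks = []
        for c in range(k):
            idx = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(emb[idx] - km.cluster_centers_[c], axis=1)
            picks.append(idx[dists.argmin()])
        return [sentences[i] for i in sorted(picks)]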

Text data augmentations: Permutation, antonyms and negation

G Haralabopoulos, MT Torres… - Expert Systems with …, 2021 - Elsevier
Text has traditionally been used to train automated classifiers for a multitude of purposes, such as classification, topic modelling, and sentiment analysis. State-of-the-art LSTM …
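A rough sketch of the three augmentation families named in the title; the WordNet antonym lookup and the naive copula-based negation rule are implementation assumptions, not the authors' exact procedures.

    import random
    from nltk.corpus import wordnet   # requires nltk.download("wordnet")

    def permute(tokens):
        # Permutation: shuffle token order within the sentence.
        out = tokens[:]
        random.shuffle(out)
        return out

    def antonym_swap(tokens):
        # Antonyms: replace the first token that has a WordNet antonym.
        for i, tok in enumerate(tokens):
            for syn in wordnet.synsets(tok):
                for lemma in syn.lemmas():
                    if lemma.antonyms():
                        return tokens[:i] + [lemma.antonyms()[0].name()] + tokens[i + 1:]
        return tokens

    def negate(tokens):
        # Negation: insert "not" after a copular verb, if present.
        for i, tok in enumerate(tokens):
            if tok in {"is", "are", "was", "were"}:
                return tokens[:i + 1] + ["not"] + tokens[i + 1:]
        return tokens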

MorphTE: Injecting morphology in tensorized embeddings

G Gan, P Zhang, S Li, X Lu… - Advances in Neural …, 2022 - proceedings.neurips.cc
In the era of deep learning, word embeddings are essential when dealing with text tasks.
However, storing and accessing these embeddings requires a large amount of space. This …
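Per the abstract, the point of morpheme-based tensorized embeddings is to store only a small morpheme table and compose word vectors from it. A toy sketch of that idea follows, using a flattened outer product of morpheme vectors; the segmentation and dimensions are invented for illustration, and this is not MorphTE's exact construction.

    import numpy as np

    q = 16                                        # morpheme vector size (assumed)
    table = {m: np.random.randn(q) for m in ["un", "break", "able"]}

    def word_embedding(parts):
        # Compose a word vector as the flattened outer product of its
        # morpheme vectors: two q-dim vectors give one q*q-dim vector,
        # so only the small morpheme table needs to be stored.
        v = table[parts[0]]
        for p in parts[1:]:
            v = np.outer(v, table[p]).ravel()
        return v

    e = word_embedding(["un", "break"])           # shape (256,)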

Compression of recurrent neural networks for efficient language modeling

AM Grachev, DI Ignatov, AV Savchenko - Applied Soft Computing, 2019 - Elsevier
Recurrent neural networks have proved to be an effective method for statistical language
modeling. However, in practice their memory and run-time complexity are usually too large …

Numerical optimizations for weighted low-rank estimation on language model

T Hua, YC Hsu, F Wang, Q Lou, Y Shen… - arXiv preprint arXiv …, 2022 - arxiv.org
Singular value decomposition (SVD) is one of the most popular compression methods that
approximate a target matrix with smaller matrices. However, standard SVD treats the …
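One standard way to realize a weighted low-rank fit, in the spirit of this line of work, is to scale rows by their importance, truncate the SVD, and unscale. The sketch below uses random scores standing in for importance weights (e.g., Fisher information); it is a generic sketch, not the paper's optimization.

    import numpy as np

    def weighted_low_rank(W, row_weights, r):
        # Rows with larger weights are reconstructed more faithfully:
        # scale rows up, take the rank-r SVD, then scale back.
        s = np.sqrt(row_weights)[:, None]
        U, S, Vt = np.linalg.svd(s * W, full_matrices=False)
        A = (U[:, :r] * S[:r]) / s        # n x r factor, unscaled
        B = Vt[:r, :]                     # r x d factor
        return A, B

    W = np.random.randn(100, 50)
    w = np.random.rand(100) + 1e-3        # stand-in importance scores
    A, B = weighted_low_rank(W, w, r=10)  # A @ B approximates W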

Robust Low-Rank Matrix Recovery as Mixed Integer Programming via ℓ0-Norm Optimization

ZL Shi, XP Li, W Li, T Yan, J Wang… - IEEE Signal Processing …, 2023 - ieeexplore.ieee.org
This letter focuses on robust low-rank matrix recovery (RLRMR) in the presence of gross sparse outliers. Instead of using the ℓ1-norm to reduce or suppress the influence of anomalies, we …
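The letter's mixed-integer formulation does not fit in a snippet, but the decomposition it targets, an observation split into a low-rank part plus sparse outliers, can be illustrated with a generic alternating scheme: truncated SVD for the low-rank term, hard thresholding for the outliers. This is a textbook-style sketch, not the authors' algorithm, and the rank and threshold are tuning assumptions.

    import numpy as np

    def lowrank_plus_sparse(M, r, thresh, iters=50):
        # Alternate between fitting a rank-r term by truncated SVD
        # and capturing large residuals as sparse outliers.
        S = np.zeros_like(M)
        for _ in range(iters):
            U, d, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :r] * d[:r]) @ Vt[:r, :]
            R = M - L
            S = np.where(np.abs(R) > thresh, R, 0.0)
        return L, S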

No fine-tuning, no cry: Robust SVD for compressing deep networks

M Tukan, A Maalouf, M Weksler, D Feldman - Sensors, 2021 - mdpi.com
A common technique for compressing a neural network is to compute the k-rank ℓ2 approximation A_k of the matrix A ∈ ℝ^{n×d} via SVD that corresponds to a fully connected …
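In practice, the rank-k truncation described in this snippet amounts to replacing one fully connected layer with two thinner ones. A PyTorch sketch under assumed layer sizes follows, using plain truncated SVD rather than the paper's robust variant.

    import torch
    import torch.nn as nn

    def factorize_linear(layer, k):
        # Replace weight W (out x in) with B (k x in) followed by
        # A (out x k), the rank-k SVD truncation of W.
        U, S, Vh = torch.linalg.svd(layer.weight, full_matrices=False)
        first = nn.Linear(layer.in_features, k, bias=False)
        second = nn.Linear(k, layer.out_features)
        first.weight.data = Vh[:k, :]
        second.weight.data = U[:, :k] * S[:k]
        second.bias.data = layer.bias.data.clone()
        return nn.Sequential(first, second)

    fc = nn.Linear(1024, 1024)
    small = factorize_linear(fc, k=64)    # roughly 8x fewer weights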