Survey: Exploiting data redundancy for optimization of deep learning
Data redundancy is ubiquitous in the inputs and intermediate results of Deep Neural
Networks (DNN). It offers many significant opportunities for improving DNN performance and …
Language model compression with weighted low-rank factorization
Factorizing a large matrix into small matrices is a popular strategy for model compression.
Singular value decomposition (SVD) plays a vital role in this compression strategy …
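The core move shared by these SVD-based compression papers, splitting one large matrix into two thin factors, fits in a few lines of NumPy. A minimal sketch, assuming an arbitrary weight matrix and a rank chosen by the compression budget (both values below are illustrative, not taken from the paper):

    import numpy as np

    def factorize(W, k):
        # Truncated SVD: keep only the k largest singular values/vectors.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        A = U[:, :k] * s[:k]   # shape (m, k)
        B = Vt[:k, :]          # shape (k, n)
        return A, B            # W is approximated by A @ B

    W = np.random.randn(768, 3072)   # placeholder weight matrix
    A, B = factorize(W, k=64)
    # Storage drops from m*n to k*(m + n) values; here ~2.36M -> ~0.25M.
    print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))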
A unified collaborative representation learning for neural-network based recommender systems
With the rise of neural networks, recommendation methods have improved significantly
thanks to their powerful prediction and inference capabilities. Existing neural-network based …
A sentence is known by the company it keeps: Improving Legal Document Summarization Using Deep Clustering
The appropriate understanding and fast processing of lengthy legal documents are
computationally challenging problems. Designing efficient automatic summarization …
Text data augmentations: Permutation, antonyms and negation
G Haralabopoulos, MT Torres… - Expert Systems with …, 2021 - Elsevier
Text has traditionally been used to train automated classifiers for a multitude of purposes,
such as classification, topic modelling, and sentiment analysis. State-of-the-art LSTM …
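Of the three augmentations named in the title, permutation is the simplest to sketch. A toy illustration (the whitespace tokenization and shuffling policy are assumptions for illustration, not the authors' exact procedure; the antonym and negation variants would additionally need a lexical resource such as WordNet):

    import random

    def permute_augment(sentence, seed=0):
        # Create a new training sample by shuffling the word order.
        rng = random.Random(seed)
        words = sentence.split()
        rng.shuffle(words)
        return " ".join(words)

    print(permute_augment("the service at this hotel was excellent"))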
MorphTE: Injecting morphology in tensorized embeddings
G Gan, P Zhang, S Li, X Lu… - Advances in Neural …, 2022 - proceedings.neurips.cc
In the era of deep learning, word embeddings are essential when dealing with text tasks.
However, storing and accessing these embeddings requires a large amount of space. This …
Compression of recurrent neural networks for efficient language modeling
AM Grachev, DI Ignatov, AV Savchenko - Applied Soft Computing, 2019 - Elsevier
Recurrent neural networks have proved to be an effective method for statistical language
modeling. However, in practice their memory and run-time complexity are usually too large …
Numerical optimizations for weighted low-rank estimation on language model
Singular value decomposition (SVD) is one of the most popular compression methods that
approximate a target matrix with smaller matrices. However, standard SVD treats the …
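A common way to make low-rank estimation "weighted" is to scale each row of the target matrix by an importance weight before the SVD and fold the scaling back into the factors afterwards. A sketch under that assumption (the row weights below are placeholders; the paper's actual importance estimator is not reproduced here):

    import numpy as np

    def weighted_low_rank(W, row_weights, k):
        # Minimize sum_i w_i * ||W_i - (A @ B)_i||^2 by factorizing diag(sqrt(w)) @ W.
        d = np.sqrt(row_weights)[:, None]
        U, s, Vt = np.linalg.svd(d * W, full_matrices=False)
        A = (U[:, :k] * s[:k]) / d   # undo the row scaling
        B = Vt[:k, :]
        return A, B                  # rows with larger weights are approximated more faithfully

    W = np.random.randn(100, 50)
    w = np.random.rand(100) + 0.1    # positive per-row importance weights
    A, B = weighted_low_rank(W, w, k=10)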
Robust Low-Rank Matrix Recovery as Mixed Integer Programming via ℓ0-Norm Optimization
This letter focuses on the robust low-rank matrix recovery (RLRMR) in the presence of gross
sparse outliers. Instead of using the ℓ1-norm to reduce or suppress the influence of anomalies, we …
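For context, the textbook program behind robust low-rank recovery with gross sparse outliers (a standard formulation, not necessarily the letter's exact mixed-integer model) is

    \min_{L,\,S} \ \operatorname{rank}(L) + \lambda \|S\|_0
    \quad \text{s.t.} \quad M = L + S,

where M is the observed matrix, L the low-rank component, and S the sparse outliers. The usual convex route relaxes the rank to the nuclear norm and the ℓ0-norm to the ℓ1-norm, whereas keeping the ℓ0-norm leads to the mixed-integer view the title refers to.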
No fine-tuning, no cry: Robust SVD for compressing deep networks
A common technique for compressing a neural network is to compute the k-rank ℓ₂
approximation A_k of the matrix A ∈ ℝ^{n×d} via SVD that corresponds to a fully connected …
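In that setting, the rank-k approximation turns one fully connected layer into two thinner ones. A sketch of the replacement (layer sizes and rank are illustrative assumptions):

    import numpy as np

    def compress_fc(A, k):
        # A_k = U_k S_k V_k^T is the best rank-k l2 approximation of A (Eckart-Young).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        W1 = Vt[:k, :]          # first small layer: (k, d)
        W2 = U[:, :k] * s[:k]   # second small layer: (n, k)
        return W1, W2           # x -> A @ x becomes x -> W2 @ (W1 @ x)

    A = np.random.randn(512, 1024)   # fully connected layer, A in R^{n x d}
    W1, W2 = compress_fc(A, k=32)
    x = np.random.randn(1024)
    print(np.linalg.norm(A @ x - W2 @ (W1 @ x)))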