Tiny machine learning: progress and futures [feature]
Tiny machine learning (TinyML) is a new frontier of machine learning. By squeezing deep
learning models into billions of IoT devices and microcontrollers (MCUs), we expand the …
On-device training under 256KB memory
On-device training enables the model to adapt to new data collected from the sensors by
fine-tuning a pre-trained model. Users can benefit from customized AI models without having …
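The snippet above describes the general recipe: adapt a pre-trained model to freshly collected sensor data. Below is a minimal PyTorch sketch of that recipe only; the backbone, data loader, and hyperparameters are illustrative assumptions, not the paper's actual system, which co-designs the training algorithm and runtime to fit the 256KB budget.

```python
# Illustrative sketch only: adapt a pre-trained model to new on-device data
# while keeping the training footprint small by updating just the classifier.
# The model structure, loader, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn


def on_device_finetune(model: nn.Sequential, loader, epochs: int = 1) -> nn.Module:
    # Freeze everything except the last (classifier) layer so gradients and
    # optimizer state are only kept for a tiny fraction of the parameters.
    for p in model.parameters():
        p.requires_grad = False
    head = model[-1]
    for p in head.parameters():
        p.requires_grad = True

    opt = torch.optim.SGD(head.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:          # x: batch of sensor data, y: labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```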
TinyTL: Reduce memory, not parameters for efficient on-device learning
Efficient on-device learning requires a small memory footprint at training time to fit the tight
memory constraint. Existing work solves this problem by reducing the number of trainable …
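As a rough illustration of what "reduce memory, not parameters" means in practice: weight gradients require storing intermediate activations, while bias gradients do not, so freezing weights and updating only biases (TinyTL additionally adds small lite-residual modules, omitted here) shrinks training memory even though the parameter count is unchanged. A hedged PyTorch sketch:

```python
# Hedged sketch: select only bias terms for training. Bias gradients do not
# need the layer's input activations, so activation memory (the dominant cost
# at training time) drops sharply even though the parameter count is unchanged.
import torch.nn as nn


def bias_only_parameters(model: nn.Module):
    """Freeze weights, unfreeze biases, and return the trainable parameter list."""
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
        if param.requires_grad:
            trainable.append(param)
    return trainable
```

The returned list would be handed to the optimizer; everything else stays frozen.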
Knowledge transfer via pre-training for recommendation: A review and prospect
Recommender systems aim to provide item recommendations for users and are usually
faced with data sparsity problems (e.g., cold start) in real-world scenarios. Recently, pre …
Enable deep learning on mobile devices: Methods, systems, and applications
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial
intelligence (AI), including computer vision, natural language processing, and speech …
Multi-task federated learning for personalised deep neural networks in edge computing
Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural
Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous …
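To make the federated setup above concrete, here is a generic sketch of one FedAvg-style round in which each client keeps some "personal" layers local, one common way to personalise federated models. This is not the paper's exact multi-task FL algorithm; the personal-layer criterion is an assumption for illustration.

```python
# Generic FedAvg-style aggregation with per-client personal layers kept local.
# Not the paper's exact algorithm; the is_personal() rule is an assumption.
import copy
import torch
import torch.nn as nn


def is_personal(param_name: str) -> bool:
    # Illustrative choice: the classifier head stays on-device, un-averaged.
    return param_name.startswith("classifier")


def fedavg_round(global_state: dict, client_models: list) -> dict:
    """Average shared parameters across clients; personal ones never leave devices."""
    new_state = copy.deepcopy(global_state)
    for name in global_state:
        if is_personal(name):
            continue
        stacked = torch.stack([m.state_dict()[name].float() for m in client_models])
        new_state[name] = stacked.mean(dim=0)
    return new_state
```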
Attentive single-tasking of multiple tasks
KK Maninis, I Radosavovic… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
In this work we address task interference in universal networks by considering that a network
is trained on multiple tasks, but performs one task at a time, an approach we refer to as "…
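A simplified sketch of the "one task at a time" idea: a shared backbone whose features are modulated by a per-task channel-attention block, with only the selected task's block active in a given forward pass. The paper itself also uses task-specific adapters and adversarial training, which are omitted here.

```python
# Simplified illustration of task-conditioned feature modulation: one SE-style
# gate per task, and a single gate is applied per forward pass (single-tasking).
import torch
import torch.nn as nn


class TaskGate(nn.Module):
    def __init__(self, channels: int, num_tasks: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.gates = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
                nn.Conv2d(hidden, channels, 1), nn.Sigmoid(),
            )
            for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Only the chosen task's gate modulates the shared backbone features.
        return x * self.gates[task_id](x)
```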
Conv-adapter: Exploring parameter efficient transfer learning for convnets
While parameter-efficient tuning (PET) methods have shown great potential with the transformer
architecture on Natural Language Processing (NLP) tasks, their effectiveness with large …
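To illustrate the PET pattern for ConvNets referenced above: a small bottleneck adapter is added as a residual branch on top of frozen convolutional features, and only the adapter (plus, typically, the task head) is trained. The exact Conv-Adapter architecture differs in its details; this is a generic sketch with an assumed reduction ratio.

```python
# Generic parameter-efficient adapter for convolutional features: only this
# small residual module is trained while the backbone stays frozen. The real
# Conv-Adapter design differs in details; the reduction ratio is an assumption.
import torch
import torch.nn as nn


class ConvAdapter(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.dw = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck: frozen features plus a cheap learned correction.
        return x + self.up(self.act(self.dw(self.down(x))))
```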
Parameter-efficient transfer from sequential behaviors for user modeling and recommendation
Inductive transfer learning has had a big impact on computer vision and NLP domains but
has not been used in the area of recommender systems. Even though there has been a …
Multi-scale aligned distillation for low-resolution detection
In instance-level detection tasks (e.g., object detection), reducing input resolution is an easy
option to improve runtime efficiency. However, this option severely hurts the detection …
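As a rough illustration of distilling across resolutions: teacher features computed on high-resolution inputs are resized to the low-resolution student's spatial size and matched with an L2 loss. The actual method aligns the teacher's feature-pyramid levels against the student; this sketch assumes matching channel counts and only conveys the alignment idea.

```python
# Simplified resolution-aligned feature distillation: resize high-res teacher
# features to the student's spatial size and penalise the L2 gap. Assumes
# [N, C, H, W] feature maps with matching channel counts (otherwise a 1x1
# projection would be needed). Not the paper's full multi-scale scheme.
import torch
import torch.nn.functional as F


def aligned_distill_loss(student_feats, teacher_feats):
    loss = torch.zeros(())
    for s, t in zip(student_feats, teacher_feats):
        t_resized = F.interpolate(t, size=s.shape[-2:], mode="bilinear",
                                  align_corners=False)
        loss = loss + F.mse_loss(s, t_resized)
    return loss / max(len(student_feats), 1)
```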