Tiny machine learning: progress and futures [feature]

J Lin, L Zhu, WM Chen, WC Wang… - IEEE Circuits and …, 2023 - ieeexplore.ieee.org
Tiny machine learning (TinyML) is a new frontier of machine learning. By squeezing deep
learning models into billions of IoT devices and microcontrollers (MCUs), we expand the …
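The "squeezing" generally involves compressing weights and activations to low precision. As a generic illustration only (PyTorch's stock dynamic-quantization API, not the MCU-specific inference stack this article surveys), int8 weight storage alone shrinks a model several-fold:

```python
import io
import torch
import torch.nn as nn

# Toy model standing in for a network to be deployed on a tiny device.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Post-training dynamic quantization: weights stored as int8,
# activations quantized at runtime. A generic PyTorch facility,
# used here only to illustrate the idea of shrinking a model.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m):
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.tell()

print(f"fp32: {serialized_size(model)} bytes")
print(f"int8: {serialized_size(quantized)} bytes")
```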

On-device training under 256KB memory

J Lin, L Zhu, WM Chen, WC Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
On-device training enables the model to adapt to new data collected from the sensors by
fine-tuning a pre-trained model. Users can benefit from customized AI models without having …
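The snippet cuts off before the method. For a flavor of how on-device fine-tuning can fit a tight memory budget, here is a hedged sketch of the sparse-update idea: freeze the backbone and re-enable gradients only for a small, hand-picked subset of parameters (the paper's actual system adds quantization-aware scaling and a compile-time training engine, omitted here):

```python
import torch
import torch.nn as nn

# Toy pre-trained backbone + classifier head.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)
model = nn.Sequential(backbone, head)

# Sparse update: freeze everything, then re-enable gradients only for
# the head and the backbone biases. Layers whose weights are frozen
# need no stored input activations for weight gradients, which is what
# keeps the training-time memory footprint small.
for p in model.parameters():
    p.requires_grad = False
for p in head.parameters():
    p.requires_grad = True
for name, p in backbone.named_parameters():
    if name.endswith("bias"):
        p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-2)

# One fine-tuning step on a batch of freshly collected sensor data.
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```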

TinyTL: Reduce memory, not parameters for efficient on-device learning

H Cai, C Gan, L Zhu, S Han - Advances in Neural …, 2020 - proceedings.neurips.cc
Efficient on-device learning requires a small memory footprint at training time to fit the tight
memory constraint. Existing work solves this problem by reducing the number of trainable …
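The central observation, sketched below under simplifying assumptions (the paper additionally adds a lite residual module, not shown), is that for y = Wx + b the weight gradient needs the stored input activation x while the bias gradient needs only the output gradient. Freezing W and training only b therefore frees most training-time activation memory:

```python
import torch
import torch.nn as nn

# For y = W x + b: dL/dW = g x^T requires the saved activation x,
# but dL/db = g does not. Freezing W lets the framework drop x
# after the forward pass, shrinking training memory.
layer = nn.Linear(512, 512)
layer.weight.requires_grad = False   # frozen: no activation kept for it
layer.bias.requires_grad = True      # trainable: gradient is just g

x = torch.randn(32, 512)
y = layer(x).sum()
y.backward()
print(layer.weight.grad)        # None: the weight was frozen
print(layer.bias.grad.shape)    # torch.Size([512])
```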

Knowledge transfer via pre-training for recommendation: A review and prospect

Z Zeng, C Xiao, Y Yao, R Xie, Z Liu, F Lin, L Lin… - Frontiers in big …, 2021 - frontiersin.org
Recommender systems aim to provide item recommendations for users and are usually
faced with data sparsity problems (e.g., cold start) in real-world scenarios. Recently pre …

Enable deep learning on mobile devices: Methods, systems, and applications

H Cai, J Lin, Y Lin, Z Liu, H Tang, H Wang… - ACM Transactions on …, 2022 - dl.acm.org
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial
intelligence (AI), including computer vision, natural language processing, and speech …

Multi-task federated learning for personalised deep neural networks in edge computing

J Mills, J Hu, G Min - IEEE Transactions on Parallel and …, 2021 - ieeexplore.ieee.org
Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural
Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous …

Attentive single-tasking of multiple tasks

KK Maninis, I Radosavovic… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
In this work we address task interference in universal networks by considering that a network
is trained on multiple tasks, but performs one task at a time, an approach we refer to as "…
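One way to realize this is a shared trunk with a lightweight per-task attention block, so each forward pass is refined for exactly one task. The sketch below is loosely inspired by the paper's task-specific squeeze-and-excitation modules; the details are illustrative:

```python
import torch
import torch.nn as nn

class TaskConditioned(nn.Module):
    def __init__(self, channels=16, num_tasks=3):
        super().__init__()
        self.trunk = nn.Conv2d(3, channels, 3, padding=1)
        # One lightweight channel-attention block per task.
        self.task_attn = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(channels, channels), nn.Sigmoid())
            for _ in range(num_tasks)
        ])
        self.heads = nn.ModuleList(
            [nn.Conv2d(channels, 1, 1) for _ in range(num_tasks)])

    def forward(self, x, task: int):
        feats = torch.relu(self.trunk(x))
        gate = self.task_attn[task](feats).unsqueeze(-1).unsqueeze(-1)
        return self.heads[task](feats * gate)  # one task at a time

model = TaskConditioned()
out = model(torch.randn(2, 3, 32, 32), task=1)
print(out.shape)  # torch.Size([2, 1, 32, 32])
```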

Conv-Adapter: Exploring parameter-efficient transfer learning for ConvNets

H Chen, R Tao, H Zhang, Y Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
While parameter-efficient tuning (PET) methods have shown great potential with transformer
architecture on Natural Language Processing (NLP) tasks, their effectiveness with large …
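The adapter idea carries over to ConvNets as a light depthwise bottleneck trained alongside a frozen backbone block. The shapes below follow the general Conv-Adapter recipe; exact placement and dimensioning in the paper may differ:

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    # Pointwise down-projection, depthwise 3x3, pointwise up-projection.
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv2d(channels, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.up = nn.Conv2d(hidden, channels, 1)

    def forward(self, x):
        return self.up(torch.relu(self.dw(torch.relu(self.down(x)))))

# Backbone block stays frozen; only the adapter receives gradients.
frozen_block = nn.Conv2d(32, 32, 3, padding=1)
for p in frozen_block.parameters():
    p.requires_grad = False
adapter = ConvAdapter(32)

x = torch.randn(1, 32, 16, 16)
# Parallel insertion: the adapter's output is added to the block's.
y = frozen_block(x) + adapter(x)
```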

Parameter-efficient transfer from sequential behaviors for user modeling and recommendation

F Yuan, X He, A Karatzoglou, L Zhang - Proceedings of the 43rd …, 2020 - dl.acm.org
Inductive transfer learning has had a big impact on computer vision and NLP domains but
has not been used in the area of recommender systems. Even though there has been a …

Multi-scale aligned distillation for low-resolution detection

L Qi, J Kuen, J Gu, Z Lin, Y Wang… - Proceedings of the …, 2021 - openaccess.thecvf.com
In instance-level detection tasks (e.g., object detection), reducing input resolution is an easy
option to improve runtime efficiency. However, this option severely hurts the detection …
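A rough sketch of the cross-resolution distillation at the heart of this approach: the teacher sees the high-resolution image, the student a downsampled one, and the teacher's feature map is pooled to the student's spatial size before a matching loss. The paper's multi-scale alignment across a feature pyramid is more involved; this shows only the core loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in single-layer "backbones" for teacher and student.
teacher = nn.Conv2d(3, 16, 3, stride=2, padding=1).eval()
student = nn.Conv2d(3, 16, 3, stride=2, padding=1)

hi = torch.randn(2, 3, 128, 128)                       # high-res input
lo = F.interpolate(hi, scale_factor=0.5,
                   mode="bilinear", align_corners=False)  # low-res input

with torch.no_grad():
    t_feat = teacher(hi)                               # (2, 16, 64, 64)
s_feat = student(lo)                                   # (2, 16, 32, 32)

# Align the teacher's feature map to the student's resolution,
# then match features with an L2 distillation loss.
t_aligned = F.adaptive_avg_pool2d(t_feat, s_feat.shape[-2:])
distill_loss = F.mse_loss(s_feat, t_aligned)
distill_loss.backward()
```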