TinyTrain: Deep Neural Network Training at the Extreme Edge

YD Kwon, R Li, SI Venieris… - arXiv preprint arXiv:2307.09988, 2023 - theyoungkwon.github.io
Abstract
On-device training is essential for user personalisation and privacy. With the pervasiveness of IoT devices and microcontroller units (MCUs), this task becomes more challenging due to the constrained memory and compute resources, and the limited availability of labelled user data. Nonetheless, prior works neglect the data scarcity issue, require excessively long training time (e.g., a few hours), or induce substantial accuracy loss (≥ 10%). We propose TinyTrain, an on-device training approach that drastically reduces training time by selectively updating parts of the model and explicitly coping with data scarcity. TinyTrain introduces a task-adaptive sparse-update method that dynamically selects the layers/channels to update based on a multi-objective criterion that jointly captures the user data and the memory and compute capabilities of the target device, leading to high accuracy on unseen tasks with a reduced computation and memory footprint. TinyTrain outperforms vanilla fine-tuning of the entire network by 3.6-5.0% in accuracy, while reducing the backward-pass memory and computation cost by up to 2,286× and 7.68×, respectively. Targeting broadly used real-world edge devices, TinyTrain achieves 9.5× faster and 3.5× more energy-efficient training over status-quo approaches, and a 2.8× smaller memory footprint than SOTA approaches, while remaining within the 1 MB memory envelope of MCU-grade platforms.
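
As a concrete illustration of the task-adaptive sparse update described in the abstract, the sketch below shows one plausible way such a multi-objective layer selection could look in PyTorch. This is a hedged sketch, not TinyTrain's actual algorithm: it approximates each layer's importance on scarce user data with a Fisher-style squared-gradient score from a single backward pass, divides by a parameter-count proxy for the memory/compute cost of updating that layer, and unfreezes only the top-scoring layers. The function names (`select_layers_to_update`, `freeze_all_but`), the `budget` parameter, and the exact scoring formula are assumptions made for illustration.

```python
# Hypothetical sketch of task-adaptive sparse updating (NOT TinyTrain's exact
# method): score each layer by importance-per-cost on a small user-data batch,
# then fine-tune only the highest-scoring layers.
import torch
import torch.nn as nn

def select_layers_to_update(model, data_batch, loss_fn, budget=2):
    """Rank layers by (squared-gradient importance) / (update cost) and
    return the names of the `budget` best layers to keep trainable."""
    inputs, targets = data_batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()  # one backward pass on the scarce labelled user data

    scores = {}
    for name, module in model.named_modules():
        params = [p for p in module.parameters(recurse=False) if p.grad is not None]
        if not params:
            continue
        # Importance: Fisher-like squared-gradient mass on this batch.
        importance = sum((p.grad ** 2).sum().item() for p in params)
        # Cost proxy: parameter count, standing in for the memory and
        # compute needed to store and update this layer on an MCU.
        cost = sum(p.numel() for p in params)
        scores[name] = importance / cost
    return sorted(scores, key=scores.get, reverse=True)[:budget]

def freeze_all_but(model, selected):
    """Disable gradients everywhere except the selected layers."""
    for name, module in model.named_modules():
        for p in module.parameters(recurse=False):
            p.requires_grad = name in selected

# Toy usage: a tiny CNN and a fake few-shot batch of 4 labelled samples.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
)
batch = (torch.randn(4, 3, 32, 32), torch.randint(0, 5, (4,)))
chosen = select_layers_to_update(model, batch, nn.CrossEntropyLoss())
freeze_all_but(model, chosen)
print("updating only:", chosen)
```

Selecting by importance-per-cost rather than importance alone is what lets a fixed budget reflect the device's memory and compute envelope; the paper's criterion additionally operates at channel granularity, which this layer-level sketch omits.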