TinyML for ultra-low power AI and large scale IoT deployments: A systematic review

N Schizas, A Karras, C Karras, S Sioutas - Future Internet, 2022 - mdpi.com
The rapid emergence of low-power embedded devices and modern machine learning (ML)
algorithms has created a new Internet of Things (IoT) era where lightweight ML frameworks …

TinyML: A systematic review and synthesis of existing research

H Han, J Siebert - … on Artificial Intelligence in Information and …, 2022 - ieeexplore.ieee.org
Tiny Machine Learning (TinyML), a rapidly evolving edge computing concept that links
embedded systems (hardware and software) and machine learning, with the purpose of …

On-device training under 256kb memory

J Lin, L Zhu, WM Chen, WC Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
On-device training enables the model to adapt to new data collected from the sensors by
fine-tuning a pre-trained model. Users can benefit from customized AI models without having …
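The snippet only gestures at the mechanism, so the sketch below shows the bare idea of on-device adaptation: keep a pre-trained backbone frozen and fine-tune only a small classifier head with SGD on freshly collected samples. All sizes, names, and the softmax/cross-entropy head are illustrative assumptions; the memory-saving techniques the paper introduces are not reproduced.

/*
 * Minimal sketch of on-device adaptation: fine-tune only the last
 * fully connected layer of a pre-trained model on freshly collected
 * samples. Feature extraction by the frozen backbone is abstracted
 * away; all sizes and names are illustrative, not the paper's method.
 */
#include <stdio.h>
#include <math.h>

#define FEAT_DIM 8       /* features produced by the frozen backbone */
#define NUM_CLASSES 3
#define LEARNING_RATE 0.05f

static float weights[NUM_CLASSES][FEAT_DIM]; /* trainable head */
static float bias[NUM_CLASSES];

/* One SGD step on a single (feature, label) pair using softmax + cross-entropy. */
static void finetune_step(const float *feat, int label)
{
    float logits[NUM_CLASSES], probs[NUM_CLASSES], max = -1e30f, sum = 0.0f;

    for (int c = 0; c < NUM_CLASSES; c++) {
        logits[c] = bias[c];
        for (int i = 0; i < FEAT_DIM; i++)
            logits[c] += weights[c][i] * feat[i];
        if (logits[c] > max) max = logits[c];
    }
    for (int c = 0; c < NUM_CLASSES; c++) {
        probs[c] = expf(logits[c] - max);
        sum += probs[c];
    }
    for (int c = 0; c < NUM_CLASSES; c++) {
        float grad = probs[c] / sum - (c == label ? 1.0f : 0.0f);
        bias[c] -= LEARNING_RATE * grad;
        for (int i = 0; i < FEAT_DIM; i++)
            weights[c][i] -= LEARNING_RATE * grad * feat[i];
    }
}

int main(void)
{
    /* A fake "sensor" feature vector with its ground-truth label. */
    float feat[FEAT_DIM] = {0.1f, 0.7f, -0.3f, 0.2f, 0.0f, 0.5f, -0.1f, 0.4f};
    for (int step = 0; step < 20; step++)
        finetune_step(feat, 1);
    printf("w[1][0] after adaptation: %f\n", weights[1][0]);
    return 0;
}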

Memory-efficient patch-based inference for tiny deep learning

J Lin, WM Chen, H Cai, C Gan… - Advances in Neural …, 2021 - proceedings.neurips.cc
Tiny deep learning on microcontroller units (MCUs) is challenging due to the limited memory
size. We find that the memory bottleneck is due to the imbalanced memory distribution in …
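To make the memory argument concrete, here is a minimal sketch of patch-based execution, with made-up shapes and a 3x3 averaging layer standing in for a real convolution: the layer is evaluated one output patch at a time, so only a small input window needs to be live at once instead of the full-resolution feature map. This illustrates the general idea, not the paper's scheduling.

/*
 * Sketch of patch-based inference: instead of materializing the full
 * input feature map for an early layer, process one spatial patch at a
 * time so the live activation window stays small. Shapes and the 3x3
 * averaging "layer" are illustrative only.
 */
#include <stdio.h>

#define IMG 32        /* full input resolution */
#define PATCH 8       /* output patch side */
#define K 3           /* kernel size */
#define HALO (K / 2)  /* extra border rows/cols each patch needs */

static float image[IMG][IMG];   /* pretend sensor frame */
static float output[IMG][IMG];  /* layer output */

/* Run the layer on one output patch, reading only a (PATCH+2*HALO)^2 window. */
static void run_patch(int py, int px)
{
    for (int y = py; y < py + PATCH; y++) {
        for (int x = px; x < px + PATCH; x++) {
            float acc = 0.0f;
            int n = 0;
            for (int dy = -HALO; dy <= HALO; dy++) {
                for (int dx = -HALO; dx <= HALO; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < IMG && xx >= 0 && xx < IMG) {
                        acc += image[yy][xx];
                        n++;
                    }
                }
            }
            output[y][x] = acc / (float)n;  /* 3x3 mean as a stand-in layer */
        }
    }
}

int main(void)
{
    for (int y = 0; y < IMG; y++)
        for (int x = 0; x < IMG; x++)
            image[y][x] = (float)(y * IMG + x);

    for (int py = 0; py < IMG; py += PATCH)
        for (int px = 0; px < IMG; px += PATCH)
            run_patch(py, px);

    size_t full = (size_t)IMG * IMG * sizeof(float);
    size_t patch = (size_t)(PATCH + 2 * HALO) * (PATCH + 2 * HALO) * sizeof(float);
    printf("full-map activation buffer: %zu bytes, per-patch window: %zu bytes\n",
           full, patch);
    return 0;
}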

Tiny machine learning: progress and futures [feature]

J Lin, L Zhu, WM Chen, WC Wang… - IEEE Circuits and …, 2023 - ieeexplore.ieee.org
Tiny machine learning (TinyML) is a new frontier of machine learning. By squeezing deep
learning models into billions of IoT devices and microcontrollers (MCUs), we expand the …

StreamNet: memory-efficient streaming tiny deep learning inference on the microcontroller

HS Zheng, YY Liu, CF Hsu… - Advances in Neural …, 2024 - proceedings.neurips.cc
With emerging Tiny Machine Learning (TinyML) inference applications, there is growing interest in deploying TinyML models on the low-power Microcontroller Unit …

Scolar: A spiking digital accelerator with dual fixed point for continual learning

V Karia, FT Zohora, N Soures… - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
Spiking neural network models, when deployed in dynamic environments, catastrophically forget previously learned tasks. In this paper, we propose a reconfigurable spiking digital …

Leveraging large language models for peptide antibiotic design

C Guan, FC Fernandes, OL Franco… - Cell Reports Physical …, 2024 - cell.com
Large language models (LLMs) have significantly impacted various domains of our society,
including recent applications in complex fields such as biology and chemistry. These …

Design of leading zero counters on FPGAs

S Perri, F Spagnolo, F Frustaci… - IEEE Embedded …, 2022 - ieeexplore.ieee.org
This letter presents a novel leading zero counter (LZC) able to efficiently exploit the
hardware resources available within state-of-the-art FPGA devices to achieve high-speed …
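As a behavioral reference only (the letter's contribution is the FPGA architecture itself, which is not reproduced here), a 32-bit LZC can be modeled in software with a binary-search structure that mirrors the log2(N) stages a hardware counter typically uses:

/*
 * Software reference model of a 32-bit leading zero counter (LZC),
 * given as a behavioral baseline against which a hardware design
 * could be checked; not the letter's FPGA implementation.
 */
#include <stdio.h>
#include <stdint.h>

/* Binary-search style count: halve the search window at each step. */
static unsigned lzc32(uint32_t x)
{
    unsigned n = 0;
    if (x == 0) return 32;
    if ((x & 0xFFFF0000u) == 0) { n += 16; x <<= 16; }
    if ((x & 0xFF000000u) == 0) { n += 8;  x <<= 8;  }
    if ((x & 0xF0000000u) == 0) { n += 4;  x <<= 4;  }
    if ((x & 0xC0000000u) == 0) { n += 2;  x <<= 2;  }
    if ((x & 0x80000000u) == 0) { n += 1; }
    return n;
}

int main(void)
{
    uint32_t tests[] = {0x80000000u, 0x00010000u, 0x00000001u, 0x0u};
    for (unsigned i = 0; i < 4; i++)
        printf("lzc(0x%08X) = %u\n", tests[i], lzc32(tests[i]));
    return 0;
}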

ACTION: Automated Hardware-Software Codesign Framework for Low-precision Numerical Format SelecTION in TinyML

HF Langroudi, V Karia, T Pandit, B Mashaido… - Conference on Next …, 2022 - Springer
In this paper, a new low-precision hardware-software codesign framework is presented to
optimally select the numerical formats and bit-precision for TinyML models and benchmarks …
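The framework's actual search procedure is not visible in the snippet; the following sketch merely illustrates the kind of evaluation such a selection loop might run: quantize a weight vector to several candidate symmetric fixed-point widths and keep the narrowest one whose reconstruction error stays under an accuracy budget. The candidate widths, error budget, and rounding scheme are assumptions for illustration, not ACTION's method.

/*
 * Minimal sketch of bit-precision selection: quantize a weight tensor to
 * several candidate fixed-point widths and keep the narrowest one whose
 * reconstruction error stays below a budget. The budget, widths and
 * symmetric rounding scheme are illustrative assumptions.
 */
#include <stdio.h>
#include <math.h>

#define N_WEIGHTS 6

static const float weights[N_WEIGHTS] = {0.31f, -0.72f, 0.05f, 0.88f, -0.14f, 0.49f};

/* Mean squared error after symmetric uniform quantization to `bits` bits. */
static float quant_mse(const float *w, int n, int bits)
{
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(w[i]) > max_abs) max_abs = fabsf(w[i]);

    float levels = (float)((1 << (bits - 1)) - 1); /* e.g. 127 for 8 bits */
    float scale = max_abs / levels;
    float mse = 0.0f;
    for (int i = 0; i < n; i++) {
        float q = roundf(w[i] / scale) * scale;
        mse += (w[i] - q) * (w[i] - q);
    }
    return mse / (float)n;
}

int main(void)
{
    const float budget = 1e-4f;        /* illustrative accuracy proxy */
    int candidates[] = {4, 6, 8, 10};

    for (unsigned i = 0; i < 4; i++) {
        float mse = quant_mse(weights, N_WEIGHTS, candidates[i]);
        printf("%2d-bit MSE = %g\n", candidates[i], mse);
        if (mse <= budget) {
            printf("selected %d-bit format\n", candidates[i]);
            break;
        }
    }
    return 0;
}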