TinyML for ultra-low power AI and large scale IoT deployments: A systematic review
The rapid emergence of low-power embedded devices and modern machine learning (ML)
algorithms has created a new Internet of Things (IoT) era where lightweight ML frameworks …
TinyML: A systematic review and synthesis of existing research
Tiny Machine Learning (TinyML), a rapidly evolving edge computing concept that links
embedded systems (hardware and software) and machine learning, with the purpose of …
On-device training under 256KB memory
On-device training enables the model to adapt to new data collected from the sensors by
fine-tuning a pre-trained model. Users can benefit from customized AI models without having …
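
As a rough illustration of how on-device fine-tuning can fit a small memory budget, the C sketch below updates only a final classifier layer with plain SGD while the backbone stays frozen. The feature dimension, class count, float arithmetic, and the train_step name are assumptions for illustration; the paper's own memory-saving training techniques are not reproduced here.

/* Minimal sketch: on-device fine-tuning of only the final classifier layer.
 * Sizes are hypothetical; W and b would be loaded from the pre-trained model. */
#include <math.h>

#define FEAT_DIM 64   /* frozen-backbone feature size (assumed) */
#define NUM_CLS  4    /* number of target classes (assumed)     */

static float W[NUM_CLS][FEAT_DIM];  /* classifier weights */
static float b[NUM_CLS];            /* classifier bias    */

/* One SGD step on a single (feature, label) pair with softmax cross-entropy. */
void train_step(const float *feat, int label, float lr)
{
    float logits[NUM_CLS], probs[NUM_CLS], max = -1e30f, sum = 0.0f;

    for (int c = 0; c < NUM_CLS; c++) {
        logits[c] = b[c];
        for (int i = 0; i < FEAT_DIM; i++)
            logits[c] += W[c][i] * feat[i];
        if (logits[c] > max) max = logits[c];
    }
    for (int c = 0; c < NUM_CLS; c++) {
        probs[c] = expf(logits[c] - max);
        sum += probs[c];
    }
    for (int c = 0; c < NUM_CLS; c++) {
        float grad = probs[c] / sum - (c == label ? 1.0f : 0.0f);
        b[c] -= lr * grad;
        for (int i = 0; i < FEAT_DIM; i++)
            W[c][i] -= lr * grad * feat[i];
    }
}

Updating only the last layer keeps the training-time memory close to the inference footprint, since no intermediate activations of the backbone need to be stored for backpropagation.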
Memory-efficient patch-based inference for tiny deep learning
Tiny deep learning on microcontroller units (MCUs) is challenging due to the limited memory
size. We find that the memory bottleneck is due to the imbalanced memory distribution in …
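
The patch-based idea of trading recomputation for peak activation memory can be sketched as follows: the memory-heavy first stage runs once per spatial patch, so only a patch-sized buffer is resident at any time. The image size, patch size, and the stand-in pooling used as the "first stage" are assumptions; a real deployment would run the initial convolution blocks per patch and handle overlapping halos.

/* Sketch of patch-based inference over the memory-heavy first stage:
 * rather than materializing the full 96x96 activation map, the early layers
 * run per 32x32 patch, so only one patch-sized buffer is live at a time.
 * The "first stage" here is a stand-in 8x8 average pool. */
#include <stdint.h>

#define IN_H   96
#define IN_W   96
#define PATCH  32          /* assumed to divide IN_H and IN_W         */
#define STRIDE 8           /* overall downsampling of the first stage */

static void first_stage_stub(const int8_t *patch, int8_t *tile)
{
    /* placeholder for the early conv layers: 8x8 average pooling */
    for (int ty = 0; ty < PATCH / STRIDE; ty++)
        for (int tx = 0; tx < PATCH / STRIDE; tx++) {
            int acc = 0;
            for (int y = 0; y < STRIDE; y++)
                for (int x = 0; x < STRIDE; x++)
                    acc += patch[(ty * STRIDE + y) * PATCH + tx * STRIDE + x];
            tile[ty * (PATCH / STRIDE) + tx] = (int8_t)(acc / (STRIDE * STRIDE));
        }
}

void patch_based_first_stage(const int8_t *input, int8_t *output)
{
    int8_t patch_buf[PATCH * PATCH];                       /* peak activation */
    int8_t tile_buf[(PATCH / STRIDE) * (PATCH / STRIDE)];
    const int out_w = IN_W / STRIDE;

    for (int py = 0; py < IN_H; py += PATCH)
        for (int px = 0; px < IN_W; px += PATCH) {
            for (int y = 0; y < PATCH; y++)                /* copy one patch */
                for (int x = 0; x < PATCH; x++)
                    patch_buf[y * PATCH + x] = input[(py + y) * IN_W + px + x];

            first_stage_stub(patch_buf, tile_buf);

            for (int y = 0; y < PATCH / STRIDE; y++)       /* scatter output tile */
                for (int x = 0; x < PATCH / STRIDE; x++)
                    output[(py / STRIDE + y) * out_w + px / STRIDE + x] =
                        tile_buf[y * (PATCH / STRIDE) + x];
        }
}

Restricting patch execution to the first stage keeps the recomputation overhead small, because later stages already operate on small, downsampled activations.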
Tiny machine learning: progress and futures [feature]
Tiny machine learning (TinyML) is a new frontier of machine learning. By squeezing deep
learning models into billions of IoT devices and microcontrollers (MCUs), we expand the …
StreamNet: memory-efficient streaming tiny deep learning inference on the microcontroller
With emerging Tiny Machine Learning (TinyML) inference applications, there is growing interest in deploying TinyML models on the low-power Microcontroller Unit …
Scolar: A spiking digital accelerator with dual fixed point for continual learning
Spiking neural network models, when deployed in dynamic environments, catastrophically forget previously learned tasks. In this paper, we propose a reconfigurable spiking digital …
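
For context on the arithmetic such a spiking accelerator implements, here is a textbook leaky integrate-and-fire update in single fixed-point (Q8.8) C; the format, leak factor, and threshold are assumptions, and this is not Scolar's dual fixed-point datapath or its continual-learning mechanism.

/* Generic leaky integrate-and-fire (LIF) neuron update in Q8.8 fixed point. */
#include <stdint.h>

#define Q        8                      /* fractional bits (Q8.8, assumed) */
#define ONE      (1 << Q)
#define LEAK     ((int32_t)(0.9 * ONE)) /* membrane decay factor (assumed) */
#define V_THRESH (1 * ONE)              /* firing threshold (assumed)      */

typedef struct { int32_t v; } lif_neuron_t;

/* Returns 1 if the neuron spikes for this input current, else 0. */
int lif_step(lif_neuron_t *n, int32_t input_q)
{
    /* v <- leak * v + input, in fixed point */
    n->v = (int32_t)(((int64_t)n->v * LEAK) >> Q) + input_q;
    if (n->v >= V_THRESH) {
        n->v = 0;                       /* reset after spike */
        return 1;
    }
    return 0;
}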
Leveraging large language models for peptide antibiotic design
Large language models (LLMs) have significantly impacted various domains of our society,
including recent applications in complex fields such as biology and chemistry. These …
Design of leading zero counters on FPGAs
This letter presents a novel leading zero counter (LZC) able to efficiently exploit the hardware resources available within state-of-the-art FPGA devices to achieve high-speed …
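
As a behavioural reference for what an LZC computes, the C model below counts the leading zeros of a 32-bit word with a compare-and-shift reduction. It mirrors the usual hierarchical LZC structure but is only a software model, not the letter's FPGA-specific mapping.

/* Software reference model of a 32-bit leading zero counter (LZC). */
#include <stdint.h>

static uint32_t lzc32(uint32_t x)
{
    uint32_t n = 0;
    if (x == 0) return 32;
    if ((x & 0xFFFF0000u) == 0) { n += 16; x <<= 16; }
    if ((x & 0xFF000000u) == 0) { n += 8;  x <<= 8;  }
    if ((x & 0xF0000000u) == 0) { n += 4;  x <<= 4;  }
    if ((x & 0xC0000000u) == 0) { n += 2;  x <<= 2;  }
    if ((x & 0x80000000u) == 0) { n += 1; }
    return n;
}

The compare-and-shift form keeps the depth logarithmic in the word width, which is why hardware LZCs are typically built as trees.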
ACTION: Automated Hardware-Software Codesign Framework for Low-precision Numerical Format SelecTION in TinyML
In this paper, a new low-precision hardware-software codesign framework is presented to optimally select the numerical formats and bit precision for TinyML models and benchmarks …
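
A minimal sketch of the kind of search such a codesign flow performs: quantize a weight tensor at several candidate bit widths and keep the narrowest format whose mean squared quantization error stays under a budget. The candidate set, error metric, and budget are illustrative assumptions, not the framework's actual cost model or hardware-aware selection.

/* Pick the narrowest symmetric uniform quantization that meets an error budget. */
#include <math.h>
#include <stddef.h>

static double quant_error(const float *w, size_t n, int bits)
{
    float maxabs = 0.0f;
    for (size_t i = 0; i < n; i++)
        if (fabsf(w[i]) > maxabs) maxabs = fabsf(w[i]);
    if (maxabs == 0.0f) return 0.0;

    float scale = maxabs / (float)((1 << (bits - 1)) - 1);
    double err = 0.0;
    for (size_t i = 0; i < n; i++) {
        float q = roundf(w[i] / scale) * scale;   /* quantize-dequantize */
        err += (w[i] - q) * (w[i] - q);
    }
    return err / (double)n;                       /* mean squared error */
}

int select_bit_width(const float *w, size_t n, double budget)
{
    static const int candidates[] = { 4, 6, 8, 12, 16 };  /* assumed candidate formats */
    for (size_t c = 0; c < sizeof candidates / sizeof candidates[0]; c++)
        if (quant_error(w, n, candidates[c]) <= budget)
            return candidates[c];
    return 32;                                    /* fall back to full precision */
}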