TinyML meets IoT: A comprehensive survey
L Dutta, S Bharali - Internet of Things, 2021 - Elsevier
The rapid growth in miniaturization of low-power embedded devices and advancement in
the optimization of machine learning (ML) algorithms have opened up a new prospect of the …
Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks
Deep neural models, in recent years, have been successful in almost every field, even
solving the most complex problem statements. However, these models are huge in size with …
Decoupled knowledge distillation
State-of-the-art distillation methods are mainly based on distilling deep features from
intermediate layers, while the significance of logit distillation is greatly overlooked. To …
SAM-CLIP: Merging vision foundation models towards semantic and spatial understanding
The landscape of publicly available vision foundation models (VFMs) such as CLIP and
SAM is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their …
R-Drop: Regularized dropout for neural networks
Dropout is a powerful and widely used technique to regularize the training of deep neural
networks. Though effective and performing well, the randomness introduced by dropout …
Knowledge distillation with the reused teacher classifier
Knowledge distillation aims to compress a powerful yet cumbersome teacher model
into a lightweight student model without much sacrifice of performance. For this purpose …
Weak-to-strong generalization: Eliciting strong capabilities with weak supervision
Widely used alignment techniques, such as reinforcement learning from human feedback
(RLHF), rely on the ability of humans to supervise model behavior, for example, to evaluate …
L2G: A simple local-to-global knowledge transfer framework for weakly supervised semantic segmentation
Mining precise class-aware attention maps, a.k.a. class activation maps, is essential for
weakly supervised semantic segmentation. In this paper, we present L2G, a simple online …
BirdNET: A deep learning solution for avian diversity monitoring
Variation in avian diversity in space and time is commonly used as a metric to assess
environmental changes. Conventionally, such data were collected by expert observers, but …
Knowledge distillation: A survey
In recent years, deep neural networks have been successful in both industry and academia,
especially for computer vision tasks. The great success of deep learning is mainly due to its …