Max Sponner
Affiliation unknown
Verified email at infineon.com
Title · Cited by · Year
Adapting Neural Networks at Runtime: Current Trends in At-Runtime Optimizations for Deep Learning
M Sponner, B Waschneck, A Kumar
ACM Computing Surveys 56 (10), 1-40, 2024
Cited by 6 · 2024
Compiler toolchains for deep learning workloads on embedded platforms
M Sponner, B Waschneck, A Kumar
arXiv preprint arXiv:2104.04576, 2021
Cited by 6 · 2021
Temporal Patience: Efficient Adaptive Deep Learning for Embedded Radar Data Processing
M Sponner, J Ott, L Servadei, B Waschneck, R Wille, A Kumar
arXiv preprint arXiv:2309.05686, 2023
Cited by 5 · 2023
AI-driven performance modeling for AI inference workloads
M Sponner, B Waschneck, A Kumar
Electronics 11 (15), 2316, 2022
Cited by 5 · 2022
Temporal Decisions: Leveraging Temporal Correlation for Efficient Decisions in Early Exit Neural Networks
M Sponner, L Servadei, B Waschneck, R Wille, A Kumar
arXiv preprint arXiv:2403.07958, 2024
Cited by 2 · 2024
Harnessing Temporal Information for Efficient Edge AI
M Sponner, L Servadei, B Waschneck, R Wille, A Kumar
2024 9th International Conference on Fog and Mobile Edge Computing (FMEC), 5-13, 2024
Cited by 1 · 2024
Efficient Post-Training Augmentation for Adaptive Inference in Heterogeneous and Distributed IoT Environments
M Sponner, L Servadei, B Waschneck, R Wille, A Kumar
arXiv preprint arXiv:2403.07957, 2024
Cited by 1 · 2024
Leveraging Temporal Patterns: Automated Augmentation to Create Temporal Early Exit Networks for Efficient Edge AI
M Sponner, L Servadei, B Waschneck, R Wille, A Kumar
IEEE Access, 2024
2024
Early-exit neural networks for radar processing
M Sponner, L Servadei, B Waschneck
US Patent App. 18/584,629, 2024
2024
Articles 1–9