Charbel Sakr
Research Scientist, NVIDIA
Verified email at nvidia.com - Homepage
Title · Cited by · Year
Analytical guarantees on numerical precision of deep neural networks
C Sakr, Y Kim, N Shanbhag
International Conference on Machine Learning, 3007-3016, 2017
Cited by 113 · 2017
PredictiveNet: An energy-efficient convolutional neural network via zero prediction
Y Lin, C Sakr, Y Kim, N Shanbhag
2017 IEEE international symposium on circuits and systems (ISCAS), 1-4, 2017
Cited by 92 · 2017
Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
C Sakr, N Shanbhag
International Conference on Learning Representations, 2019
Cited by 54 · 2019
An analytical method to determine minimum per-layer precision of deep neural networks
C Sakr, N Shanbhag
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by 48 · 2018
Fundamental limits on the precision of in-memory architectures
SK Gonugondla, C Sakr, H Dbouk, NR Shanbhag
Proceedings of the 39th International Conference on Computer-Aided Design, 1-9, 2020
Cited by 43 · 2020
HarDNN: Feature map vulnerability evaluation in CNNs
A Mahmoud, SKS Hari, CW Fletcher, SV Adve, C Sakr, N Shanbhag, ...
arXiv preprint arXiv:2002.09786, 2020
Cited by 43 · 2020
Optimizing Selective Protection for CNN Resilience
A Mahmoud, SKS Hari, CW Fletcher, SV Adve, C Sakr, NR Shanbhag, ...
ISSRE, 127-138, 2021
Cited by 40 · 2021
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
C Sakr, N Wang, CY Chen, J Choi, A Agrawal, N Shanbhag, ...
International Conference on Learning Representations, 2019
Cited by 40 · 2019
True gradient-based training of deep binary activated neural networks via continuous binarization
C Sakr, J Choi, Z Wang, K Gopalakrishnan, N Shanbhag
2018 IEEE international conference on acoustics, speech and signal …, 2018
Cited by 31 · 2018
Optimal clipping and magnitude-aware differentiation for improved quantization-aware training
C Sakr, S Dai, R Venkatesan, B Zimmer, W Dally, B Khailany
International Conference on Machine Learning, 19123-19138, 2022
Cited by 29 · 2022
Minimum precision requirements for the SVM-SGD learning algorithm
C Sakr, A Patil, S Zhang, Y Kim, N Shanbhag
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
Cited by 25 · 2017
A 0.44-μJ/dec, 39.9-μs/dec, Recurrent Attention In-Memory Processor for Keyword Spotting
H Dbouk, SK Gonugondla, C Sakr, NR Shanbhag
IEEE Journal of Solid-State Circuits 56 (7), 2234-2244, 2020
Cited by 24 · 2020
A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm
B Keller, R Venkatesan, S Dai, SG Tell, B Zimmer, C Sakr, WJ Dally, ...
IEEE Journal of Solid-State Circuits 58 (4), 1129-1141, 2023
Cited by 19 · 2023
KeyRAM: A 0.34-μJ/decision 18 k decisions/s recurrent attention in-memory processor for keyword spotting
H Dbouk, SK Gonugondla, C Sakr, NR Shanbhag
2020 IEEE Custom Integrated Circuits Conference (CICC), 1-4, 2020
Cited by 18 · 2020
Signal processing methods to enhance the energy efficiency of in-memory computing architectures
C Sakr, NR Shanbhag
IEEE Transactions on Signal Processing 69, 6462-6472, 2021
Cited by 17 · 2021
Fundamental limits on energy-delay-accuracy of in-memory architectures in inference applications
SK Gonugondla, C Sakr, H Dbouk, NR Shanbhag
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2021
Cited by 17 · 2021
Facilitating neural network efficiency
J Choi, K Gopalakrishnan, C Sakr, S Venkataramani, Z Wang
US Patent 11,195,096, 2021
Cited by 7 · 2021
Understanding the energy and precision requirements for online learning
C Sakr, A Patil, S Zhang, Y Kim, N Shanbhag
arXiv preprint arXiv:1607.00669, 2016
Cited by 6 · 2016
VaPr: Variable-precision tensors to accelerate robot motion planning
YS Hsiao, SKS Hari, B Sundaralingam, J Yik, T Tambe, C Sakr, ...
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2023
Cited by 4 · 2023
Minimum precision requirements for deep learning with biomedical datasets
C Sakr, N Shanbhag
2018 IEEE Biomedical Circuits and Systems Conference (BioCAS), 1-4, 2018
Cited by 3 · 2018
Articles 1–20