Recurrent neural networks for language understanding K Yao, G Zweig, MY Hwang, Y Shi, D Yu In Fourteenth Annual Conference of the International Speech Communication …, 2013 | 411 | 2013 |
Spoken language understanding using long short-term memory neural networks K Yao, B Peng, Y Zhang, D Yu, G Zweig, Y Shi 2014 IEEE Spoken Language Technology Workshop (SLT), 189-194, 2014 | 402 | 2014 |
TorchAudio: Building blocks for audio and speech processing YY Yang, M Hira, Z Ni, A Astafurov, C Chen, C Puhrsch, D Pollack, ... ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022 | 170 | 2022 |
Emformer: Efficient memory transformer based acoustic model for low latency streaming speech recognition Y Shi, Y Wang, C Wu, CF Yeh, J Chan, F Zhang, D Le, M Seltzer ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021 | 162 | 2021 |
LLM-QAT: Data-free quantization aware training for large language models Z Liu, B Oguz, C Zhao, E Chang, P Stock, Y Mehdad, Y Shi, ... arXiv preprint arXiv:2305.17888, 2023 | 108 | 2023 |
Contextual spoken language understanding using recurrent neural networks Y Shi, K Yao, H Chen, YC Pan, MY Hwang, B Peng IEEE International Conference on Acoustics, Speech and Signal Processing, 2015 | 90 | 2015 |
Contextualized streaming end-to-end speech recognition with trie-based deep biasing and shallow fusion D Le, M Jain, G Keren, S Kim, Y Shi, J Mahadeokar, J Chan, ... arXiv preprint arXiv:2104.02194, 2021 | 72 | 2021 |
Deep lstm based feature mapping for query classification Y Shi, K Yao, L Tian, D Jiang Proceedings of the 2016 Conference of the North American Chapter of the …, 2016 | 71 | 2016 |
Streaming transformer-based acoustic models using self-attention with augmented memory C Wu, Y Wang, Y Shi, CF Yeh, F Zhang arXiv preprint arXiv:2005.08042, 2020 | 67 | 2020 |
Recurrent neural network language model adaptation with curriculum learning Y Shi, M Larson, CM Jonker Computer Speech & Language 33 (1), 136-154, 2015 | 49 | 2015 |
Towards recurrent neural networks language models with linguistic and contextual features Y Shi, P Wiggers, CM Jonker Interspeech 12, 1664-1667, 2012 | 49 | 2012 |
Weak-attention suppression for transformer based speech recognition Y Shi, Y Wang, C Wu, C Fuegen, F Zhang, D Le, CF Yeh, ML Seltzer arXiv preprint arXiv:2005.09137, 2020 | 28 | 2020 |
Knowledge distillation for recurrent neural network language modeling with trust regularization Y Shi, MY Hwang, X Lei, H Sheng ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019 | 26 | 2019 |
Mining effective negative training samples for keyword spotting J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020 | 25 | 2020 |
Dissecting user-perceived latency of on-device E2E speech recognition Y Shangguan, R Prabhavalkar, H Su, J Mahadeokar, Y Shi, J Zhou, C Wu, ... arXiv preprint arXiv:2104.02207, 2021 | 24 | 2021 |
Higher order iteration schemes for unconstrained optimization Y Shi, P Pan American Journal of Operations Research 1 (03), 73, 2011 | 24 | 2011 |
Region proposal network based small-footprint keyword spotting J Hou, Y Shi, M Ostendorf, MY Hwang, L Xie IEEE Signal Processing Letters 26 (10), 1471-1475, 2019 | 23 | 2019 |
Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications Y Wang, Y Shi, F Zhang, C Wu, J Chan, CF Yeh, A Xiao ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021 | 18 | 2021 |
Evaluations of interventions using mathematical models with exponential and non-exponential distributions for disease stages: the case of Ebola X Wang, Y Shi, Z Feng, J Cui Bulletin of mathematical biology 79, 2149-2173, 2017 | 18 | 2017 |
MobileLLM: Optimizing sub-billion parameter language models for on-device use cases Z Liu, C Zhao, F Iandola, C Lai, Y Tian, I Fedorov, Y Xiong, E Chang, ... arXiv preprint arXiv:2402.14905, 2024 | 15 | 2024 |