Optimizing CNN model inference on CPUs

Y Liu, Y Wang, R Yu, M Li, V Sharma… - 2019 USENIX Annual …, 2019 - usenix.org
The popularity of Convolutional Neural Network (CNN) models and the ubiquity of CPUs
imply that better performance of CNN model inference on CPUs can deliver significant gain …

Research progress and prospects of speech recognition technology

H Wang, J Pan, C Liu - Telecommunications Science (电信科学), 2018 - infocomm-journal.com
Automatic speech recognition (ASR) technology aims to enable machines to "understand" human speech by converting spoken information into readable
text. It is a key technology for human-computer interaction and a long-standing research focus. In recent years …

Uncertainty estimation in deep learning with application to spoken language assessment

A Malinin - 2019 - repository.cam.ac.uk
Since convolutional neural networks (CNNs) achieved top performance on the ImageNet
task in 2012, deep learning has become the preferred approach to addressing computer …

A two-timescale duplex neurodynamic approach to biconvex optimization

H Che, J Wang - IEEE Transactions on Neural Networks and …, 2018 - ieeexplore.ieee.org
This paper presents a two-timescale duplex neurodynamic system for constrained biconvex
optimization. The two-timescale duplex neurodynamic system consists of two recurrent …

Recurrent neural network language model adaptation for multi-genre broadcast speech recognition

X Chen, T Tan, X Liu, P Lanchantin, M Wan… - … Annual Conference of …, 2015 - academia.edu
Recurrent neural network language models (RNNLMs) have recently become increasingly
popular for many applications including speech recognition. In previous research RNNLMs …

Bidirectional recurrent neural network language models for automatic speech recognition

E Arisoy, A Sethy, B Ramabhadran… - 2015 IEEE International …, 2015 - ieeexplore.ieee.org
Recurrent neural network language models have enjoyed great success in speech
recognition, partially due to their ability to model longer-distance context than word n-gram …

CUED-RNNLM—An open-source toolkit for efficient training and evaluation of recurrent neural network language models

X Chen, X Liu, Y Qian, MJF Gales… - … on acoustics, speech …, 2016 - ieeexplore.ieee.org
In recent years, recurrent neural network language models (RNNLMs) have become
increasingly popular for a range of applications including speech recognition. However, the …

Recurrent neural network language model training with noise contrastive estimation for speech recognition

X Chen, X Liu, MJF Gales… - 2015 IEEE International …, 2015 - ieeexplore.ieee.org
In recent years recurrent neural network language models (RNNLMs) have been
successfully applied to a range of tasks including speech recognition. However, an …

Feedforward sequential memory networks: A new structure to learn long-term dependency

S Zhang, C Liu, H Jiang, S Wei, L Dai, Y Hu - arXiv preprint arXiv …, 2015 - arxiv.org
In this paper, we propose a novel neural network structure, namely feedforward
sequential memory networks (FSMN), to model long-term dependency in time series …

Training language models for long-span cross-sentence evaluation

K Irie, A Zeyer, R Schlüter, H Ney - 2019 IEEE Automatic …, 2019 - ieeexplore.ieee.org
While recurrent neural networks can motivate cross-sentence language modeling and its
application to automatic speech recognition (ASR), corresponding modifications of the …