Optimizing CNN model inference on CPUs
The popularity of Convolutional Neural Network (CNN) models and the ubiquity of CPUs
imply that better performance of CNN model inference on CPUs can deliver significant gain …
[HTML] Research progress and prospects of speech recognition technology
王海坤, 潘嘉, 刘聪 - 电信科学 (Telecommunications Science), 2018 - infocomm-journal.com
The goal of automatic speech recognition (ASR) technology is to enable machines to "understand" human speech by converting spoken language into readable
text. It is a key technology for human-computer interaction and a long-standing research focus. In recent years …
Uncertainty estimation in deep learning with application to spoken language assessment
A Malinin - 2019 - repository.cam.ac.uk
Since convolutional neural networks (CNNs) achieved top performance on the ImageNet
task in 2012, deep learning has become the preferred approach to addressing computer …
A two-timescale duplex neurodynamic approach to biconvex optimization
This paper presents a two-timescale duplex neurodynamic system for constrained biconvex
optimization. The two-timescale duplex neurodynamic system consists of two recurrent …
[PDF] Recurrent neural network language model adaptation for multi-genre broadcast speech recognition
Recurrent neural network language models (RNNLMs) have recently become increasingly
popular for many applications including speech recognition. In previous research RNNLMs …
Bidirectional recurrent neural network language models for automatic speech recognition
Recurrent neural network language models have enjoyed great success in speech
recognition, partially due to their ability to model longer-distance context than word n-gram …
CUED-RNNLM—An open-source toolkit for efficient training and evaluation of recurrent neural network language models
In recent years, recurrent neural network language models (RNNLMs) have become
increasingly popular for a range of applications including speech recognition. However, the …
Recurrent neural network language model training with noise contrastive estimation for speech recognition
In recent years recurrent neural network language models (RNNLMs) have been
successfully applied to a range of tasks including speech recognition. However, an …
Feedforward sequential memory networks: A new structure to learn long-term dependency
In this paper, we propose a novel neural network structure, namely feedforward
sequential memory networks (FSMN), to model long-term dependency in time series …
Training language models for long-span cross-sentence evaluation
While recurrent neural networks can motivate cross-sentence language modeling and its
application to automatic speech recognition (ASR), corresponding modifications of the …