An attention-enhanced multi-scale and dual sign language recognition network based on a graph convolutional network
Sign language is the primary means of communication for hearing-impaired people, and research on sign language recognition can help hearing people understand sign language. We reviewed classic sign language recognition methods and found that their accuracy is limited by redundant information, finger occlusion, motion blur, the diverse signing styles of different people, and other factors. To overcome these shortcomings, we propose a multi-scale and dual sign language recognition network (SLR-Net) based on a graph convolutional network (GCN). The original input data are RGB videos; we first extract skeleton data from them and then use the skeleton data for sign language recognition. SLR-Net consists of three sub-modules: a multi-scale attention network (MSA), a multi-scale spatiotemporal attention network (MSSTA), and an attention-enhanced temporal convolution network (ATCN). MSA allows the GCN to learn dependencies between long-distance vertices; MSSTA can learn spatiotemporal features directly; and ATCN helps the network capture long-range temporal dependencies. Three attention mechanisms (multi-scale attention, spatiotemporal attention, and temporal attention) are proposed to further improve robustness and accuracy. In addition, we propose a keyframe extraction algorithm that greatly improves efficiency at the cost of a small loss in accuracy. Experimental results show that our method reaches 98.08% accuracy on the CSL-500 dataset with a 500-word vocabulary. Even on the challenging DEVISIGN-L dataset with a 2000-word vocabulary, it reaches 64.57% accuracy, outperforming other state-of-the-art sign language recognition methods.
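The abstract itself contains no code, but the multi-scale idea it describes (letting a single GCN layer mix information between distant skeleton joints, rather than only immediate neighbors) can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the function names, the use of adjacency-matrix powers as the "scales", and the toy skeleton shapes are not taken from the paper.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard in GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_scale_gcn_layer(X, A, weights):
    """One multi-scale graph convolution layer (illustrative sketch).

    X       : (num_joints, in_channels) joint features for one frame
    A       : (num_joints, num_joints) skeleton adjacency matrix
    weights : list of (in_channels, out_channels) matrices, one per scale;
              scale k aggregates over the k-th power of the normalized
              adjacency, i.e. k-hop neighborhoods, so long-distance
              vertices can interact within a single layer.
    """
    A_norm = normalize_adjacency(A)
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    A_k = np.eye(A.shape[0])
    for W in weights:
        A_k = A_k @ A_norm          # move to the next hop distance
        out += A_k @ X @ W          # aggregate k-hop features
    return np.maximum(out, 0.0)     # ReLU activation
```

With a chain-shaped toy skeleton of five joints, a three-scale layer lets the first joint receive information from joints up to three hops away in one pass, which a plain one-hop GCN layer cannot do.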
MDPI