Korean sign language recognition using EMG and IMU sensors based on group-dependent NN models
2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017
Automatic sign language recognition systems can help many hearing- and speech-impaired people communicate with the public. To recognize sign language, a system must first determine the shape of the hand and the movement of the arm. Because sign language consists of a sequence of movements, it is difficult to separate an individual gesture from the surrounding movements, and gestures vary in length. It is more effective to convert gestures into fixed-length input data than to predefine the length of each gesture for recognition. Furthermore, to improve recognition accuracy, it is effective to exploit multiple heterogeneous sensors, an electromyography (EMG) sensor and an inertial measurement unit (IMU), which provide redundant information about the same physical variable. In particular, we focus on the fact that EMG signals depend on a person's physical characteristics, because the amount of muscle and the thickness of the fat layer differ from person to person. To address these issues, we propose an automatic recognition method for Korean sign language based on sensor fusion and group-dependent Neural Network (NN) models. The idea behind group-dependent NN models is to separate the models so that different people use different models. Finally, the recognition results show that the proposed method achieves high accuracy (99.13% for the CNN without dropout and 98.1% for the CNN with dropout).
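The abstract does not give architectural details, so the following is only a minimal sketch of the kind of pipeline it describes: a 1-D CNN (with an optional dropout layer) classifying fixed-length windows of fused EMG and IMU channels. The channel counts, window length, class count, and layer sizes are assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch only (not the paper's implementation).
# Assumed: 8 EMG channels + 6 IMU channels (3-axis accel + 3-axis gyro),
# gestures resampled/padded to a fixed 100-sample window, 20 sign classes.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, in_channels=14, num_classes=20, window_len=100, dropout=0.5):
        super().__init__()
        # Convolutional feature extractor over the fused sensor channels
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),  # set dropout=0.0 for the "without dropout" variant
            nn.Linear(64 * (window_len // 4), 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):  # x: (batch, channels, window_len)
        return self.classifier(self.features(x))

# Variable-length gestures would be converted to the fixed window length before
# being fed in, and a separate model instance could be trained per user group
# to realize the group-dependent models described in the abstract.
model = SignCNN()
dummy = torch.randn(4, 14, 100)   # 4 gestures, 14 fused channels, 100 samples each
logits = model(dummy)             # -> shape (4, 20)
```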