Development of a framework for human–robot interactions with Indian sign language using possibility theory

N Baranwal, AK Singh, GC Nandi - International Journal of Social Robotics, 2017 - Springer
Abstract
This paper demonstrates the capability of the NAO humanoid robot to interact with hearing-impaired persons using Indian Sign Language (ISL). The principal contributions of the paper are: a wavelet descriptor has been applied to extract moment-invariant shape features of hand gestures, and possibility theory (PT) has been used for the classification of gestures. Preprocessing and extraction of overlapping frames (the start and end points of each gesture) are the other major tasks, which have been solved using background modeling and a novel gradient method. We have shown that the overlapping frames are helpful for fragmenting a continuous ISL gesture into isolated gestures. These isolated gestures are further processed and classified. During the segmentation process some geometrical features, such as the shape and orientation of the hand, are deformed; this has been overcome by extracting a new moment-invariant feature through the wavelet descriptor. These features are then combined with two other features (orientation and speed) and are classified using PT. Here we use PT in place of probability theory because possibility theory deals with both uncertainty and imprecision, whereas probability theory handles only uncertainty. Experiments have been performed on 20 sentences of continuous ISL gestures comprising 4000 samples, with 20 instances of each sentence. In this dataset 50% of the samples have been used for training and 50% for testing. From an analysis of the results we found that the proposed approach gives 92% classification accuracy with 20 subjects on continuous ISL gestures. This result has been compared with the results obtained with other classifiers, such as the Hidden Markov Model and KNN, showing a 10% improvement in the classification rate with the proposed approach. The classified gestures are then combined to generate a sentence in text format, which is matched against the knowledge database of the NAO robot; this matching has also increased the classification accuracy. These sentences are further converted into speech or gestures by the NAO robot.
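The wavelet-descriptor feature extraction summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a single-level Haar approximation (mean over 2x2 blocks, the LL subband) followed by the first two Hu moment invariants, computed on a toy silhouette standing in for a real segmented hand; all array sizes and the test image are invented for illustration.

```python
import numpy as np

def haar_approx(img):
    """One level of the Haar wavelet approximation (LL subband):
    average over non-overlapping 2x2 blocks, halving each dimension."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def hu_moments(img):
    """First two Hu moment invariants of a silhouette image.
    Central moments give translation invariance; the normalization
    eta(p, q) adds scale invariance; phi1, phi2 are rotation invariant."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# Toy "hand" silhouette: a filled square on a 32x32 frame
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
feat = hu_moments(haar_approx(img))
```

Because the moments are computed about the centroid and normalized by the zeroth moment, translating the silhouette inside the frame leaves the feature vector unchanged, which is the property the paper relies on when segmentation deforms the hand region.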
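The possibility-theoretic classification step can likewise be sketched. The following is a hedged illustration, not the paper's actual classifier: each class is modeled by one triangular possibility distribution per feature, the joint possibility of a feature vector is taken as the minimum over features (the standard conjunctive combination in possibility theory), and the class with the highest joint possibility wins. The gesture labels, the feature choice (orientation in degrees, speed in pixels per frame), and all triangle parameters are invented for the example.

```python
def triangular_possibility(x, a, b, c):
    """Triangular possibility distribution: 1 at the peak b, 0 outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify(features, class_models):
    """Return (best_label, scores). The joint possibility of a feature
    vector for a class is the min over per-feature possibilities."""
    scores = {}
    for label, model in class_models.items():
        per_feature = [triangular_possibility(x, a, b, c)
                       for x, (a, b, c) in zip(features, model)]
        scores[label] = min(per_feature)
    return max(scores, key=scores.get), scores

# Hypothetical class models: one (a, b, c) triangle per feature,
# features = [hand orientation in degrees, hand speed in px/frame]
models = {
    "hello":  [(0, 45, 90),   (2, 5, 8)],
    "thanks": [(60, 90, 120), (1, 3, 5)],
}
label, scores = classify([50, 5], models)
```

Unlike a probabilistic classifier, the per-class scores need not sum to one; a feature vector can be fully possible for several classes at once, which is how possibility theory expresses imprecision as well as uncertainty.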