Hands holding clues for object recognition in teachable machines

K Lee, H Kacorri - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019 - dl.acm.org
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
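The abstract describes a deep learning system that jointly models hand segmentation and object localization for object classification. The sketch below is not the authors' code; it is a minimal multi-task model in a PyTorch style, assuming a shared convolutional backbone feeding a hand-segmentation head and an object box/class head. All layer sizes, names (e.g. HandGuidedRecognizer), and the toy input resolution are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): one shared backbone,
# a per-pixel hand-segmentation head, and an object localization +
# classification head. Shapes and layer choices are illustrative only.
import torch
import torch.nn as nn

class HandGuidedRecognizer(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared encoder over the egocentric RGB frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Hand-segmentation head: hand/background logits per pixel,
        # upsampled back to the input resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        # Object head: bounding-box regression (4 values) plus class logits.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.box_head = nn.Linear(64, 4)
        self.cls_head = nn.Linear(64, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        hand_mask = self.seg_head(feat)      # (B, 1, H, W) hand logits
        pooled = self.pool(feat).flatten(1)  # (B, 64) global features
        box = self.box_head(pooled)          # (B, 4) object box estimate
        logits = self.cls_head(pooled)       # (B, num_classes)
        return hand_mask, box, logits

# Toy forward pass on a 128x128 frame.
model = HandGuidedRecognizer(num_classes=5)
mask, box, logits = model(torch.randn(1, 3, 128, 128))
print(mask.shape, box.shape, logits.shape)
```

In this kind of multi-task setup, the segmentation and classification losses would typically be summed during training so that hand cues regularize the shared features; the paper's actual architecture and training objective may differ.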