Authors
Akiko Yamazaki, Keiichi Yamazaki, Yoshinori Kuno, Matthew Burdelski, Michie Kawashima, Hideaki Kuzuoka
Publication date
2008/4/6
Book
Proceedings of the SIGCHI conference on human factors in computing systems
Pages
131-140
Description
Research over the last several decades has shown that non-verbal actions such as face and head movement play a crucial role in human interaction, and such resources are likely to play an important role in human-robot interaction as well. In developing a robotic system that employs embodied resources such as face and head movement, we cannot simply program the robot to move at random; rather, we need to consider how these actions may be timed to specific points in the talk. This paper discusses our work in developing a museum guide robot that moves its head at interactionally significant points during its explanation of an exhibit. In order to proceed, we first examined the coordination of verbal and non-verbal actions in human guide-visitor interaction. Based on this analysis, we developed a robot that moves its head at interactionally significant points in its talk. We then conducted several experiments to …
Total citations
[Per-year citation counts, 2008–2024]
Scholar articles
A Yamazaki, K Yamazaki, Y Kuno, M Burdelski… - Proceedings of the SIGCHI conference on human …, 2008