Learning gestures for customizable human-computer interaction in the operating room
L. A. Schwarz, A. Bigdelou, N. Navab
Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011, Proceedings, Part I. Springer, 2011
Abstract
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon’s movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
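The abstract describes learning a low-dimensional manifold model per gesture from training examples, then recognizing gestures (and rejecting unrelated movements) from body-worn inertial sensor readings. A minimal sketch of that idea, using linear PCA subspaces as a stand-in for the paper's manifold models (the actual method, sensor dimensionality, and rejection threshold are not specified here and are assumptions for illustration):

```python
import numpy as np

def fit_manifold(samples, dim=2):
    """Fit a linear low-dimensional model (PCA) to one gesture's
    training samples. The paper learns nonlinear manifolds; PCA
    is used here only as an illustrative stand-in."""
    mean = samples.mean(axis=0)
    # Principal axes via SVD of the centered training data.
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:dim]

def reconstruction_error(model, x):
    """Distance of a sensor sample to the gesture's subspace."""
    mean, basis = model
    proj = (x - mean) @ basis.T @ basis + mean
    return np.linalg.norm(x - proj)

def classify(models, x, threshold=0.5):
    """Pick the gesture whose model best explains the sample;
    reject as 'other movement' if no model fits well enough."""
    errs = {name: reconstruction_error(m, x) for name, m in models.items()}
    best = min(errs, key=errs.get)
    return best if errs[best] < threshold else None
```

The rejection threshold is what lets such a system distinguish learned gestures from arbitrary movements, as the abstract's last sentence requires; in practice it would be calibrated on held-out training data rather than fixed.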