K Noda, Y Yamaguchi, K Nakadai, HG Okuno, T Ogata, "Audio-visual speech recognition using deep learning," Applied Intelligence 42, 722-737, 2015. Cited by 675.
PC Yang, K Sasaki, K Suzuki, K Kase, S Sugano, T Ogata, "Repeatable folding task by humanoid robot worker using deep learning," IEEE Robotics and Automation Letters 2 (2), 397-403, 2016. Cited by 267.
K Yoshii, M Goto, K Komatani, T Ogata, HG Okuno, "Hybrid collaborative and content-based music recommendation using probabilistic model with latent user preferences," Proc. of the International Conference on Music Information Retrieval (ISMIR), 2006. Cited by 242.
K Yoshii, M Goto, K Komatani, T Ogata, HG Okuno, "An efficient hybrid music recommender system using an incrementally trainable probabilistic generative model," IEEE Transactions on Audio, Speech, and Language Processing 16 (2), 435-447, 2008. Cited by 224.
K Noda, H Arie, Y Suga, T Ogata, "Multimodal integration learning of robot behavior using deep neural networks," Robotics and Autonomous Systems 62 (6), 721-736, 2014. Cited by 215.
T Taniguchi, T Nagai, T Nakamura, N Iwahashi, T Ogata, H Asoh, "Symbol emergence in robotics: a survey," Advanced Robotics 30 (11-12), 706-728, 2016. Cited by 165.
K Noda, Y Yamaguchi, K Nakadai, HG Okuno, T Ogata, "Lipreading using convolutional neural network," Interspeech 1, 3, 2014. Cited by 165.
N Yalta, K Nakadai, T Ogata, "Sound source localization using deep learning models," Journal of Robotics and Mechatronics 29 (1), 37-48, 2017. Cited by 137.
A Schmitz, Y Bansho, K Noda, H Iwata, T Ogata, S Sugano, "Tactile object recognition using deep learning and dropout," 2014 IEEE-RAS International Conference on Humanoid Robots, 1044-1050, 2014. Cited by 123.
T Kitahara, M Goto, K Komatani, T Ogata, HG Okuno, "Instrument identification in polyphonic music: feature weighting to minimize influence of sound overlaps," EURASIP Journal on Advances in Signal Processing 2007, 1-15, 2006. Cited by 116.
H Fujihara, M Goto, J Ogata, K Komatani, T Ogata, HG Okuno, "Automatic synchronization between lyrics and music CD recordings based on Viterbi alignment of segregated vocal signals," Eighth IEEE International Symposium on Multimedia (ISM'06), 257-264, 2006. Cited by 106.
H Fujihara, T Kitahara, M Goto, K Komatani, T Ogata, HG Okuno, "Singer identification based on accompaniment sound reduction and reliable frame selection," Proc. ISMIR, 329-336, 2005. Cited by 91.
S Sugano, T Ogata, "Emergence of mind in robots for human interface: research methodology and robot model," Proceedings of the 1996 IEEE International Conference on Robotics and Automation, 1996. Cited by 90.
S Yamamoto, JM Valin, K Nakadai, J Rouat, F Michaud, T Ogata, et al., "Enhanced robot speech recognition based on microphone array source separation and missing feature theory," Proceedings of the 2005 IEEE International Conference on Robotics and …, 2005. Cited by 89.
T Ogata, M Murase, J Tani, K Komatani, HG Okuno, "Two-way translation of compound sentences and arm motions by recurrent neural networks," 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007. Cited by 85.
JJ Aucouturier, K Ikeuchi, H Hirukawa, S Nakaoka, T Shiratori, S Kudoh, et al., "Cheek to chip: dancing robots and AI's future," IEEE Intelligent Systems 23 (2), 74-84, 2008. Cited by 83.
T Yoshioka, T Kitahara, K Komatani, T Ogata, HG Okuno, "Automatic chord transcription with concurrent recognition of chord symbols and boundaries," Proceedings of the 5th International Conference on Music Information …, 2004. Cited by 77.
T Ogata, S Sugano, "Emotional communication between humans and the autonomous robot which has the emotion model," Proceedings of the 1999 IEEE International Conference on Robotics and Automation, 1999. Cited by 76.
S Yamamoto, K Nakadai, M Nakano, H Tsujino, JM Valin, K Komatani, et al., "Real-time robot audition system that recognizes simultaneous speech in the real world," 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006. Cited by 75.
T Yamada, H Matsunaga, T Ogata, "Paired recurrent autoencoders for bidirectional translation between robot actions and linguistic descriptions," IEEE Robotics and Automation Letters 3 (4), 3441-3448, 2018. Cited by 73.