Voot Tangkaratt
Research scientist at Sony AI
Verified email at sony.com
Title
Cited by
Year
Fast and scalable bayesian deep learning by weight-perturbation in adam
M Khan, D Nielsen, V Tangkaratt, W Lin, Y Gal, A Srivastava
International Conference on Machine Learning, 2611-2620, 2018
299 · 2018
Imitation learning from imperfect demonstration
YH Wu, N Charoenphakdee, H Bao, V Tangkaratt, M Sugiyama
International Conference on Machine Learning, 6818-6827, 2019
167 · 2019
TD-regularized actor-critic methods
S Parisi, V Tangkaratt, J Peters, ME Khan
Machine Learning 108, 1467-1501, 2019
48 · 2019
Variational imitation learning with diverse-quality demonstrations
V Tangkaratt, B Han, ME Khan, M Sugiyama
International Conference on Machine Learning, 9407-9417, 2020
46 · 2020
Efficient sample reuse in policy gradients with parameter-based exploration
T Zhao, H Hachiya, V Tangkaratt, J Morimoto, M Sugiyama
Neural Computation 25 (6), 1512-1547, 2013
46 · 2013
Hierarchical reinforcement learning via advantage-weighted information maximization
T Osa, V Tangkaratt, M Sugiyama
arXiv preprint arXiv:1901.01365, 2019
38 · 2019
Discovering diverse solutions in deep reinforcement learning by maximizing state–action-based mutual information
T Osa, V Tangkaratt, M Sugiyama
Neural Networks 152, 90-104, 2022
34* · 2022
Active deep Q-learning with demonstration
SA Chen, V Tangkaratt, HT Lin, M Sugiyama
Machine Learning 109 (9), 1699-1725, 2020
34 · 2020
Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation
V Tangkaratt, S Mori, T Zhao, J Morimoto, M Sugiyama
Neural Networks 57, 128-140, 2014
34 · 2014
Robust imitation learning from noisy demonstrations
V Tangkaratt, N Charoenphakdee, M Sugiyama
arXiv preprint arXiv:2010.10181, 2020
27 · 2020
Guide actor-critic for continuous control
V Tangkaratt, A Abdolmaleki, M Sugiyama
arXiv preprint arXiv:1705.07606, 2017
25 · 2017
Model-based reinforcement learning with dimension reduction
V Tangkaratt, J Morimoto, M Sugiyama
Neural Networks 84, 1-16, 2016
25 · 2016
Policy search with high-dimensional context variables
V Tangkaratt, H Van Hoof, S Parisi, G Neumann, J Peters, M Sugiyama
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
20 · 2017
Variational adaptive-Newton method for explorative learning
ME Khan, W Lin, V Tangkaratt, Z Liu, D Nielsen
arXiv preprint arXiv:1711.05560, 2017
19 · 2017
Vprop: Variational inference using rmsprop
ME Khan, Z Liu, V Tangkaratt, Y Gal
arXiv preprint arXiv:1712.01038, 2017
17 · 2017
Direct conditional probability density estimation with sparse feature selection
M Shiga, V Tangkaratt, M Sugiyama
Machine Learning 100, 161-182, 2015
16 · 2015
Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization
V Tangkaratt, N Xie, M Sugiyama
Neural Computation 27 (1), 228-254, 2014
14 · 2014
Simultaneous Planning for Item Picking and Placing by Deep Reinforcement Learning
T Tanaka, T Kaneko, M Sekine, V Tangkaratt, M Sugiyama
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
13 · 2020
Direct estimation of the derivative of quadratic mutual information with application in supervised dimension reduction
V Tangkaratt, H Sasaki, M Sugiyama
Neural Computation 29 (8), 2076-2122, 2017
13 · 2017
Meta-model-based meta-policy optimization
T Hiraoka, T Imagawa, V Tangkaratt, T Osa, T Onishi, Y Tsuruoka
Asian Conference on Machine Learning, 129-144, 2021
12 · 2021
Articles 1–20