Authors
Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham Kakade, Matus Telgarsky
Publication date
2014
Journal
The Journal of Machine Learning Research
Volume
15
Issue
1
Pages
2773-2832
Description
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models—including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation—which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
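The tensor power method described above generalizes matrix power iteration: for a symmetric third-order tensor T with an orthogonal decomposition T = Σᵢ λᵢ vᵢ⊗vᵢ⊗vᵢ, repeatedly applying the map v ↦ T(I, v, v)/‖T(I, v, v)‖ converges to one of the components vᵢ. The following is a minimal NumPy sketch of one such power-iteration step on a synthetic orthogonally decomposable tensor; function names, the iteration count, and the synthetic data are illustrative assumptions, not the paper's exact (robust, deflation-based) algorithm.

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, tol=1e-10, seed=0):
    """Estimate one eigenvector/eigenvalue pair (v, lam) of a symmetric
    3rd-order tensor T (shape d x d x d) by plain power iteration.
    Illustrative sketch only; the paper's method adds random restarts
    and deflation for robustness."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        # T(I, v, v): contract T with v along its last two modes
        u = np.einsum('ijk,j,k->i', T, v, v)
        u_norm = np.linalg.norm(u)
        if u_norm < tol:
            break
        u /= u_norm
        converged = np.linalg.norm(u - v) < tol
        v = u
        if converged:
            break
    # Rayleigh-quotient analogue: lam = T(v, v, v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)
    return v, lam

# Synthetic orthogonally decomposable tensor: T = sum_i lam_i v_i (x) v_i (x) v_i
rng = np.random.default_rng(0)
d = 5
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal components v_i
lams = (5.0, 3.0, 1.5)
T = sum(l * np.einsum('i,j,k->ijk', q, q, q) for l, q in zip(lams, Q.T))
v, lam = tensor_power_iteration(T)
```

From a random start, the iteration converges (quadratically, for orthogonally decomposable tensors) to one of the components, so the recovered `lam` should match one of the planted eigenvalues; recovering all components requires deflating T by the found component and repeating.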
Total citations
Citations per year, 2013–2024 (per-year counts from chart not recoverable)
Scholar articles
A Anandkumar, R Ge, D Hsu, SM Kakade, M Telgarsky - … Learning Theory: 26th International Conference, ALT …, 2015
A Anandkumar, R Ge, D Hsu, SM Kakade, M Telgarsky - Under Review. J. of Machine Learning. Available at, 2012
A Anandkumar, R Ge, D Hsu, SM Kakade, M Telgarsky - arXiv preprint arXiv:1210.7559