Authors
Yanjun Li, Shujian Yu, Jose C Principe, Xiaolin Li, Dapeng Wu
Publication date
2020/7/13
Journal
arXiv preprint arXiv:2007.06503
Abstract
Although substantial efforts have been made to learn disentangled representations under the variational autoencoder (VAE) framework, the fundamental properties and learning dynamics of most VAE models remain unknown and under-investigated. In this work, we first propose a novel learning objective, termed the principle-of-relevant-information variational autoencoder (PRI-VAE), to learn disentangled representations. We then present an information-theoretic perspective for analyzing existing VAE models by inspecting the evolution of several critical information-theoretic quantities across training epochs. Our observations unveil some fundamental properties associated with VAEs. Empirical results also demonstrate the effectiveness of PRI-VAE on four benchmark data sets.
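As background for the abstract above: standard VAE training objectives include a KL-divergence term between the approximate posterior q(z|x), typically a diagonal Gaussian N(mu, sigma^2), and a standard normal prior; this term is one of the information-theoretic quantities commonly tracked across training epochs. The sketch below shows only this generic closed-form KL term, not the PRI-VAE objective from the paper (which is not specified in this abstract); the function name and inputs are illustrative assumptions.

```python
import numpy as np

def gaussian_kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    summed over latent dimensions. Generic VAE regularizer,
    not the PRI-VAE objective itself."""
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    # 0.5 * sum( mu^2 + sigma^2 - 1 - log sigma^2 )
    return 0.5 * np.sum(mu ** 2 + np.exp(log_var) - 1.0 - log_var)

# When q(z|x) equals the prior (mu = 0, sigma = 1), the KL term is zero.
print(gaussian_kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # -> 0.0
```

Monitoring this quantity per epoch, as the paper's analysis does for its chosen information-theoretic quantities, reveals how much information the latent code retains about the input.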
Total citations
Scholar articles
Y Li, S Yu, JC Principe, X Li, D Wu - arXiv preprint arXiv:2007.06503, 2020