Domain adaptation via multi-layer transfer learning
Abstract
Transfer learning, which leverages labeled data in a source domain to train an accurate classifier for classification tasks in a target domain, has recently attracted extensive research interest owing to its effectiveness demonstrated by many studies. Previous approaches adopt a common strategy that models the shared structure as a bridge across different domains by reducing distribution divergences. However, those approaches entirely ignore the specific latent spaces, which can be exploited to learn non-shared concepts: only the specific latent spaces contain the specific latent factors, without which learning distinct concepts is ineffective. Additionally, learning latent factors in only one latent feature space layer may miss those in the other layers, and the missing latent factors could also help to model the shared latent structure that serves as the bridge. This paper proposes a novel transfer learning method, Multi-Layer Transfer Learning (MLTL). MLTL first generates specific latent feature spaces. Second, it combines these specific latent feature spaces with the common latent feature space into one latent feature space layer. Third, it generates multiple layers and learns the corresponding distributions on the different layers simultaneously, exploiting their pluralism: learning the distributions on one layer helps to learn the distributions on the others. Furthermore, an iterative algorithm based on Non-Negative Matrix Tri-Factorization is proposed to solve the optimization problem. Comprehensive experiments demonstrate that MLTL significantly outperforms state-of-the-art learning methods on topic and sentiment classification tasks.
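The optimizer the abstract refers to builds on Non-Negative Matrix Tri-Factorization (NMTF), which approximates a non-negative data matrix X as F S Gᵀ with non-negative factors. The sketch below shows plain NMTF with the standard multiplicative updates that monotonically decrease the Frobenius reconstruction error; it is an illustration of the underlying factorization only, not the paper's multi-layer MLTL formulation, and the function and parameter names (`nmtf`, `k1`, `k2`) are hypothetical.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Non-negative Matrix Tri-Factorization: X ~ F @ S @ G.T.

    Multiplicative updates minimizing ||X - F S G^T||_F^2,
    keeping F (m x k1), S (k1 x k2), and G (n x k2) non-negative.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))
    S = rng.random((k1, k2))
    G = rng.random((n, k2))
    for _ in range(n_iter):
        # Each factor is scaled elementwise by the ratio of the
        # gradient's positive part to its negative part.
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G
```

In a transfer-learning setting of this kind, F typically plays the role of a latent space over instances and G a latent space over features, with S linking the two; MLTL's layered construction would factor the source- and target-domain matrices jointly rather than a single X.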
Elsevier