Top-down regularization of deep belief networks

H Goh, N Thome, M Cord… - Advances in Neural Information Processing Systems, 2013 - proceedings.neurips.cc
Abstract
Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: a fully unsupervised learning phase followed by a strongly discriminative optimization. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme using a method that regularizes deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a top-down pass is used. Experiments on the MNIST dataset show improvements over existing algorithms for deep belief networks. Object recognition experiments on the Caltech-101 dataset also yield competitive results.
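To make the idea of combining bottom-up and top-down signals concrete, the following is a minimal NumPy sketch of a restricted Boltzmann machine trained with one step of contrastive divergence, in which the bottom-up hidden probabilities can be blended with a top-down target before the weight update. This is an illustrative assumption about how top-down information could enter pre-training, not the authors' exact regularization scheme; the class name `RBM`, the `top_down` argument, and the mixing weight `alpha` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1).

    The optional `top_down` signal (hypothetical) lets a higher layer bias the
    hidden representation: bottom-up hidden probabilities are blended with a
    top-down target before the gradient step, one simple way to inject
    top-down information during layer-wise pre-training.
    """

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, top_down=None, alpha=0.5):
        # Positive phase: bottom-up hidden probabilities given the data.
        h0 = self.hidden_probs(v0)
        if top_down is not None:
            # Blend bottom-up and top-down signals (alpha is a mixing weight).
            h0 = (1.0 - alpha) * h0 + alpha * top_down
        h0_sample = (rng.random(h0.shape) < h0).astype(float)

        # Negative phase: one step of Gibbs sampling.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)

        # Contrastive-divergence approximation to the log-likelihood gradient.
        batch = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)   # reconstruction error, for monitoring

# Example usage with random binary data standing in for binarized MNIST digits.
X = (rng.random((100, 784)) < 0.5).astype(float)
rbm = RBM(n_visible=784, n_hidden=256)
err = rbm.cd1_step(X, top_down=np.full((100, 256), 0.1), alpha=0.3)
```

In this sketch the top-down target simply nudges hidden activations toward a fixed value; in a stacked network the target would instead come from a pass through the layers above, which is the general flavor of the top-down regularization the abstract describes.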