Author
Harlan D Harris
Publication date
2002/8/19
Book
European Conference on Machine Learning
Pages
135-147
Publisher
Springer Berlin Heidelberg
Description
The Winnow class of on-line linear learning algorithms [10], [11] was designed to be attribute-efficient: when learning with many irrelevant attributes, Winnow makes a number of errors that is only logarithmic in the total number of attributes, whereas the Perceptron algorithm makes a nearly linear number of errors. This paper presents data arguing that the Incremental Delta-Bar-Delta (IDBD) second-order gradient-descent algorithm [14] is attribute-efficient, performs similarly to Winnow on tasks with many irrelevant attributes, and also outperforms Winnow on a task where Winnow does poorly. Preliminary analysis supports this empirical claim by showing that IDBD, like Winnow and other attribute-efficient algorithms, and unlike the Perceptron algorithm, has weights that can grow exponentially quickly. By virtue of its more flexible approach to weight updates, however, IDBD may be a more …
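A minimal sketch (not the paper's code) of the three update rules the abstract contrasts: the Perceptron's additive update, Winnow's multiplicative update, and Sutton's IDBD per-weight step-size adaptation. Hyperparameter values (alpha, theta, meta_rate) and the function names are illustrative assumptions; the multiplicative and exponentiated-step-size forms are what allow weights to grow exponentially quickly.

```python
# Sketch contrasting additive (Perceptron), multiplicative (Winnow),
# and adaptive per-weight step-size (IDBD) updates.  Constants are
# assumed defaults, not values from the paper.
import numpy as np

def perceptron_update(w, x, y, lr=1.0):
    """Additive update: on a mistake, each weight moves by at most lr*|x_i|,
    so reaching a large weight takes a number of mistakes linear in its size."""
    if y * np.dot(w, x) <= 0:                     # mistake (y in {-1, +1})
        w = w + lr * y * x
    return w

def winnow_update(w, x, y, alpha=2.0, theta=None):
    """Multiplicative update (Winnow2): relevant weights grow geometrically,
    so only O(log n) promotions are needed to cross the threshold."""
    theta = len(w) if theta is None else theta    # x in {0,1}^n, y in {0,1}
    yhat = 1 if np.dot(w, x) >= theta else 0
    if yhat == 0 and y == 1:
        w = np.where(x == 1, w * alpha, w)        # promotion
    elif yhat == 1 and y == 0:
        w = np.where(x == 1, w / alpha, w)        # demotion
    return w

def idbd_update(w, beta, h, x, y, meta_rate=0.05):
    """Sutton's IDBD for LMS: weight i carries a log step size beta[i];
    the effective step size exp(beta[i]) can itself grow, so a weight that
    is persistently useful can increase exponentially fast."""
    delta = y - np.dot(w, x)                      # LMS prediction error
    beta = beta + meta_rate * delta * x * h       # meta-level gradient step
    lr = np.exp(beta)                             # per-weight step sizes
    w = w + lr * delta * x
    h = h * np.clip(1.0 - lr * x * x, 0.0, None) + lr * delta * x
    return w, beta, h
```

The intended contrast: the Perceptron changes weights by bounded increments, while Winnow and IDBD both admit geometric weight growth, which is the property the paper's preliminary analysis ties to attribute efficiency.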