Large-scale distance metric learning for k-nearest neighbors regression

B Nguyen, C Morell, B De Baets - Neurocomputing, 2016 - Elsevier
Abstract
This paper presents a distance metric learning method for k-nearest neighbors regression. We learn the distance metric from triplet-based constraints, which are built from the neighborhood of each training instance. The resulting optimization problem can be formulated as a convex quadratic program; however, generic quadratic programming does not scale well to large problem sizes. To reduce the time complexity of training, we propose a novel dual coordinate descent method for this type of problem. Experimental results on several regression data sets show that our method obtains competitive performance compared with state-of-the-art distance metric learning methods, while being an order of magnitude faster.
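To make the triplet-constraint idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: it builds triplets (i, j, l) where instance j has a target value closer to y_i than instance l does, and learns a Mahalanobis matrix M by hinge-loss gradient steps with a projection onto the positive semidefinite cone. The paper instead solves the resulting convex quadratic program via a dual coordinate descent method; the neighborhood construction, loss, and optimizer below are simplified assumptions for exposition.

```python
import numpy as np

def learn_metric(X, y, k=3, margin=1.0, lr=0.01, epochs=50):
    """Illustrative sketch (not the paper's method): learn a Mahalanobis
    matrix M for kNN regression from triplet constraints.

    A triplet (i, j, l) encodes that j's target is close to y_i while l's
    is far, so j should lie nearer to i than l under the learned metric.
    """
    n, d = X.shape
    M = np.eye(d)

    # Build triplets from each instance's neighborhood in target space.
    triplets = []
    for i in range(n):
        order = np.argsort(np.abs(y - y[i]))   # order[0] is i itself
        near, far = order[1:k + 1], order[-k:]
        for j in near:
            for l in far:
                triplets.append((i, j, l))

    for _ in range(epochs):
        G = np.zeros((d, d))
        for i, j, l in triplets:
            dij, dil = X[i] - X[j], X[i] - X[l]
            # Hinge constraint: want d_M(i,l) >= d_M(i,j) + margin.
            if dij @ M @ dij + margin > dil @ M @ dil:
                G += np.outer(dij, dij) - np.outer(dil, dil)
        M -= lr * G / len(triplets)

        # Project back onto the PSD cone so M stays a valid metric.
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M
```

The PSD projection after each gradient step is what keeps the learned matrix a genuine (pseudo-)metric; formulating the same constraints as a convex quadratic program and solving it in the dual, as the paper does, avoids this repeated eigendecomposition and is what enables the reported speedup.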