Effects of distance measure choice on k-nearest neighbor classifier performance: a review
HA Abu Alfeilat, ABA Hassanat, O Lasassmeh… - Big Data, 2019 - liebertpub.com
The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers,
yet its performance competes with the most complex classifiers in the literature. The core of …
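The core point of this review, that the distance measure is the central ingredient of a kNN classifier, can be sketched with a minimal pure-Python implementation whose distance function is pluggable. The toy data, the two metrics shown (Euclidean and Manhattan), and all names here are illustrative assumptions, not taken from the paper:

```python
# Minimal kNN sketch with a pluggable distance measure.
# Toy data and metric choices are assumptions for illustration only.
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, query, k, dist):
    # train: list of (feature_vector, label) pairs
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    # majority vote among the k nearest neighbors
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [((0.0, 0.0), "A"), ((1.0, 0.0), "A"),
         ((0.0, 3.0), "B"), ((4.0, 4.0), "B")]
print(knn_predict(train, (0.5, 0.2), 3, euclidean))
print(knn_predict(train, (0.5, 0.2), 3, manhattan))
```

Swapping `dist` is the only change needed to evaluate a different measure, which is essentially the experimental setup such reviews describe: hold the data and `k` fixed, vary the metric, and compare accuracy.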
Distance and similarity measures effect on the performance of K-nearest neighbor classifier--a review
VB Prasath, HAA Alfeilat, A Hassanat… - arXiv preprint arXiv …, 2017 - arxiv.org
The K-nearest neighbor (KNN) classifier is one of the simplest and most common classifiers,
yet its performance competes with the most complex classifiers in the literature. The core of …
Fine-grained visual comparisons with local learning
Given two images, we want to predict which exhibits a particular visual attribute more than
the other---even when the two images are quite similar. Existing relative attribute methods …
Learning to rank for information retrieval
TY Liu - Foundations and Trends® in Information Retrieval, 2009 - nowpublishers.com
Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking
model using training data, such that the model can sort new objects according to their …
Learning a deep listwise context model for ranking refinement
Learning to rank has been intensively studied and widely applied in information retrieval.
Typically, a global ranking function is learned from a set of labeled data, which can achieve …
Preference learning and ranking by pairwise comparison
J Fürnkranz, E Hüllermeier - Preference learning, 2010 - Springer
This chapter provides an overview of recent work on preference learning and ranking via
pairwise classification. The learning by pairwise comparison (LPC) paradigm is the natural …
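The learning by pairwise comparison paradigm ends with an aggregation step: pairwise preference predictions are combined into a single ranking. A minimal sketch of one such aggregation, counting pairwise wins per item (a simple voting scheme; the chapter discusses more refined methods), with made-up item names:

```python
# Sketch of the aggregation step in learning by pairwise comparison (LPC):
# combine predicted pairwise preferences into a ranking by counting wins.
# Item names and preference data are assumptions for illustration.
from collections import defaultdict

def rank_by_pairwise_wins(preferences):
    # preferences: list of (winner, loser) predicted pairwise outcomes
    wins = defaultdict(int)
    items = set()
    for winner, loser in preferences:
        wins[winner] += 1
        items.update((winner, loser))
    # sort by win count descending; ties broken alphabetically
    return sorted(items, key=lambda i: (-wins[i], i))

prefs = [("a", "b"), ("a", "c"), ("b", "c"),
         ("c", "d"), ("a", "d"), ("b", "d")]
print(rank_by_pairwise_wins(prefs))  # -> ['a', 'b', 'c', 'd']
```

In a full LPC pipeline each pair's outcome would come from a trained binary classifier rather than being given directly, but the aggregation step is the same.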
LETOR: A benchmark collection for research on learning to rank for information retrieval
LETOR is a benchmark collection for the research on learning to rank for information
retrieval, released by Microsoft Research Asia. In this paper, we describe the details of the …
Efficient hyperparameter tuning with grid search for text categorization using kNN approach with BM25 similarity
In machine learning, hyperparameter tuning is the problem of choosing a set of optimal
hyperparameters for a learning algorithm. Several approaches have been widely adopted …
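Grid search as described here can be sketched in a few lines: enumerate candidate values of the kNN hyperparameter `k` and keep the one with the best cross-validated accuracy. The sketch below uses leave-one-out evaluation on a toy one-dimensional dataset; the data, the candidate grid, and the absolute-difference distance are assumptions (the paper pairs kNN with BM25 similarity for text, which is omitted here):

```python
# Plain grid search over the kNN hyperparameter k, scored by
# leave-one-out accuracy. Toy data and grid are illustrative assumptions.
from collections import Counter

def knn_label(train, query, k):
    # train: list of (value, label); distance is absolute difference
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_accuracy(data, k):
    # leave-one-out: predict each point from all the others
    correct = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        correct += knn_label(rest, x, k) == y
    return correct / len(data)

data = [(0.0, "A"), (0.2, "A"), (0.4, "A"),
        (1.0, "B"), (1.2, "B"), (1.4, "B")]
best_k = max([1, 3, 5], key=lambda k: loo_accuracy(data, k))
print(best_k, loo_accuracy(data, best_k))
```

The same loop generalizes to any hyperparameter grid; the cost grows with the product of the grid sizes, which is why the paper's efficiency concern matters at scale.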
Search result diversification
Ranking in information retrieval has been traditionally approached as a pursuit of relevant
information, under the assumption that the users' information needs are unambiguously …
Robust Distance Measures for kNN Classification of Cancer Data
R Ehsani, F Drabløs - Cancer informatics, 2020 - journals.sagepub.com
The k-Nearest Neighbor (kNN) classifier represents a simple and very general approach to
classification. Still, the performance of kNN classifiers can often compete with more complex …