Low-cost evaluation techniques for information retrieval systems: A review

SI Moghadasi, SD Ravana, SN Raman - Journal of Informetrics, 2013 - Elsevier
For system-based information retrieval evaluation, the test collection model remains a costly undertaking. Producing relevance judgments is an expensive, time-consuming task which has …

Reliable Information Retrieval Systems Performance Evaluation: A Review

MH Joseph, SD Ravana - IEEE Access, 2024 - ieeexplore.ieee.org
With the progress and availability of various search tools, interest in the evaluation of
information retrieval from the user perspective has grown tremendously among …

On using fewer topics in information retrieval evaluations

A Berto, S Mizzaro, S Robertson - … of the 2013 Conference on the Theory …, 2013 - dl.acm.org
The possibility of using fewer topics in TREC, and in TREC-like initiatives, has been studied
recently, with encouraging results: even when consistently decreasing the number of topics …

DiffIR: Exploring differences in ranking models' behavior

KM Jose, T Nguyen, S MacAvaney, J Dalton… - Proceedings of the 44th …, 2021 - dl.acm.org
Understanding and comparing the behavior of retrieval models is a fundamental challenge
that requires going beyond examining average effectiveness and per-query metrics …

An uncertainty-aware query selection model for evaluation of IR systems

M Hosseini, IJ Cox, N Milic-Frayling… - Proceedings of the 35th …, 2012 - dl.acm.org
We propose a mathematical framework for query selection as a mechanism for reducing the
cost of constructing information retrieval test collections. In particular, our mathematical …

Intelligent topic selection for low-cost information retrieval evaluation: A New perspective on deep vs. shallow judging

M Kutlu, T Elsayed, M Lease - Information Processing & Management, 2018 - Elsevier
While test collections provide the cornerstone for Cranfield-based evaluation of information
retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling …

Fewer topics? A million topics? Both?! On topics subsets in test collections

K Roitero, JS Culpepper, M Sanderson… - Information Retrieval …, 2020 - Springer
When evaluating IR run effectiveness using a test collection, a key question is: What search
topics should be used? We explore what happens to measurement accuracy when the …

A short survey on online and offline methods for search quality evaluation

E Kanoulas - Russian Summer School in Information Retrieval, 2015 - Springer
Abstract Evaluation has always been the cornerstone of scientific development. Scientists
come up with hypotheses (models) to explain physical phenomena, and validate these …

Correlation, prediction and ranking of evaluation metrics in information retrieval

S Gupta, M Kutlu, V Khetan, M Lease - … 14–18, 2019, Proceedings, Part I …, 2019 - Springer
Given limited time and space, IR studies often report few evaluation metrics which must be
carefully selected. To inform such selection, we first quantify correlation between 23 popular …

Text search of surnames in some slavic and other morphologically rich languages using rule based phonetic algorithms

D Zahoranský, I Polasek - IEEE/ACM Transactions on Audio …, 2015 - ieeexplore.ieee.org
Surnames play a key role as natural identifiers of persons, especially in present-day information
systems. This paper deals with optimizing a phonetic search algorithm as a string …