Promptagator: Few-shot dense retrieval from 8 examples
Much recent research on information retrieval has focused on how to transfer from one task
(typically with abundant supervised data) to various other tasks where supervision is limited …
RankT5: Fine-tuning T5 for text ranking with ranking losses
Pretrained language models such as BERT have been shown to be exceptionally effective
for text ranking. However, there are limited studies on how to leverage more powerful …
Task-aware retrieval with instructions
We study the problem of retrieval with instructions, where users of a retrieval system
explicitly describe their intent along with their queries. We aim to develop a general-purpose …
A survey of text classification with transformers: How wide? how large? how long? how accurate? how expensive? how safe?
J Fields, K Chovanec, P Madiraju - IEEE Access, 2024 - ieeexplore.ieee.org
Text classification in natural language processing (NLP) is evolving rapidly, particularly with
the surge in transformer-based models, including large language models (LLM). This paper …
ExaRanker: Synthetic explanations improve neural rankers
Recent work has shown that incorporating explanations into the output generated by large
language models (LLMs) can significantly enhance performance on a broad spectrum of …
ExaRanker: Explanation-augmented neural ranker
Recent work has shown that inducing a large language model (LLM) to generate
explanations prior to outputting an answer is an effective strategy to improve performance on …
NeuralMind-UNICAMP at 2022 TREC NeuCLIR: Large boring rerankers for cross-lingual retrieval
This paper reports on a study of cross-lingual information retrieval (CLIR) using the mT5-
XXL reranker on the NeuCLIR track of TREC 2022. Perhaps the biggest contribution of this …
Scaling laws for dense retrieval
Scaling laws have been observed in a wide range of tasks, particularly in language
generation. Previous studies have found that the performance of large language models …
InPars Toolkit: A unified and reproducible synthetic data generation pipeline for neural information retrieval
Recent work has explored Large Language Models (LLMs) to overcome the lack of training
data for Information Retrieval (IR) tasks. The generalization abilities of these models have …
Cross-lingual open-domain question answering with answer sentence generation
B Muller, L Soldaini, R Koncel-Kedziorski… - arXiv preprint arXiv …, 2021 - arxiv.org
Open-Domain Generative Question Answering has achieved impressive performance in
English by combining document-level retrieval with answer generation. These approaches …