ML Mob at SemEval-2023 Task 5: "Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait"
Online articles using striking headlines that promise intriguing information are often used to
attract readers. Most of the time, the information provided in the text is disappointing to the …
Measuring Retrieval Complexity in Question Answering Systems
In this paper, we investigate which questions are challenging for retrieval-based Question
Answering (QA). We (i) propose retrieval complexity (RC), a novel metric conditioned on the …
CARE: Collaborative AI-Assisted Reading Environment
Recent years have seen impressive progress in AI-assisted writing, yet the developments in
AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI …
NLQxform-UI: A Natural Language Interface for Querying DBLP Interactively
In recent years, the DBLP computer science bibliography has been prominently used for
searching scholarly information, such as publications, scholars, and venues. However, its …
UKP-SQuARE v3: A Platform for Multi-Agent QA Research
The continuous development of Question Answering (QA) datasets has drawn the research
community's attention toward multi-domain models. A popular approach is to use multi …
UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA
Question Answering (QA) systems are increasingly deployed in applications where they
support real-world decisions. However, state-of-the-art models rely on deep neural …
Modular and Parameter-efficient Fine-tuning of Language Models
J Pfeiffer - 2023 - tuprints.ulb.tu-darmstadt.de
Transfer learning has recently become the dominant paradigm of natural language
processing. Models pre-trained on unlabeled data can be fine-tuned for downstream tasks …
Bridging the Gap Between Wikipedians and Scientists with Terminology-Aware Translation: A Case Study in Turkish
GG Şahin - openreview.net
This project addresses the gap between the escalating volume of English-to-Turkish
Wikipedia translations and the insufficient number of contributors, particularly in technical …