ML Mob at SemEval-2023 Task 5: “Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait”

H Sterz, L Bongard, T Werner, C Poth… - Proceedings of the …, 2023 - aclanthology.org
Online articles using striking headlines that promise intriguing information are often used to
attract readers. Most of the time, the information provided in the text is disappointing to the …

Measuring Retrieval Complexity in Question Answering Systems

M Gabburo, NP Jedema, S Garg, LFR Ribeiro… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we investigate which questions are challenging for retrieval-based Question
Answering (QA). We (i) propose retrieval complexity (RC), a novel metric conditioned on the …

CARE: Collaborative AI-Assisted Reading Environment

D Zyska, N Dycke, J Buchmann, I Kuznetsov… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent years have seen impressive progress in AI-assisted writing, yet the developments in
AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI …

NLQxform-UI: A Natural Language Interface for Querying DBLP Interactively

R Wang, Z Zhang, L Rossetto, F Ruosch… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, the DBLP computer science bibliography has been prominently used for
searching scholarly information, such as publications, scholars, and venues. However, its …

UKP-SQuARE v3: A Platform for Multi-Agent QA Research

H Puerto, T Baumgärtner, R Sachdeva, H Fang… - arXiv preprint arXiv …, 2023 - arxiv.org
The continuous development of Question Answering (QA) datasets has drawn the research
community's attention toward multi-domain models. A popular approach is to use multi …

UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA

R Sachdeva, H Puerto, T Baumgärtner… - arXiv preprint arXiv …, 2022 - arxiv.org
Question Answering (QA) systems are increasingly deployed in applications where they
support real-world decisions. However, state-of-the-art models rely on deep neural …

Modular and Parameter-efficient Fine-tuning of Language Models

J Pfeiffer - 2023 - tuprints.ulb.tu-darmstadt.de
Transfer learning has recently become the dominant paradigm of natural language
processing. Models pre-trained on unlabeled data can be fine-tuned for downstream tasks …

[PDF] Bridging the Gap Between Wikipedians and Scientists with Terminology-Aware Translation: A Case Study in Turkish

GG Şahin - openreview.net
This project addresses the gap between the escalating volume of English-to-Turkish
Wikipedia translations and the insufficient number of contributors, particularly in technical …