Interactive natural language processing

Z Wang, G Zhang, K Yang, N Shi, W Zhou… - arXiv preprint arXiv …, 2023 - arxiv.org
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within
the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …

Dune: Dataset for unified editing

AF Akyürek, E Pan, G Kuwanto, D Wijaya - arXiv preprint arXiv …, 2023 - arxiv.org
Even the most advanced language models remain susceptible to errors, necessitating the ability to modify these models without initiating a comprehensive retraining process. Model editing …

DiscoPrompt: Path prediction prompt tuning for implicit discourse relation recognition

C Chan, X Liu, J Cheng, Z Li, Y Song, GY Wong… - arXiv preprint arXiv …, 2023 - arxiv.org
Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task of
recognizing the discourse relations between arguments in the absence of discourse …

Interactive question answering systems: Literature review

GM Biancofiore, Y Deldjoo, TD Noia… - ACM Computing …, 2024 - dl.acm.org
Question-answering systems are recognized as popular and frequently effective means of
information seeking on the web. In such systems, information seekers can receive a concise …

UnifiedABSA: A unified ABSA framework based on multi-task instruction tuning

Z Wang, R Xia, J Yu - arXiv preprint arXiv:2211.10986, 2022 - arxiv.org
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level
sentiment information. There are many ABSA tasks, and the current dominant paradigm is to …

Zero-Shot Learners for Natural Language Understanding via a Unified Multiple-Choice Perspective

J Wang, P Yang, R Gan, Y Zhang, J Zhang… - IEEE Access, 2023 - ieeexplore.ieee.org
Zero-shot learning is an approach where models generalize to unseen tasks without direct
training on them. We introduce the Unified Multiple-Choice (UniMC) framework, which is …

GAP: A novel Generative context-Aware Prompt-tuning method for relation extraction

Z Chen, Z Li, Y Zeng, C Zhang, H Ma - Expert Systems with Applications, 2024 - Elsevier
Prompt-tuning was proposed to bridge the gap between pretraining and downstream tasks,
and it has achieved promising results in Relation Extraction (RE). Although the existing …

Domain incremental lifelong learning in an open world

Y Dai, H Lang, Y Zheng, B Yu, F Huang, Y Li - arXiv preprint arXiv …, 2023 - arxiv.org
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously.
Architecture-based approaches are reported to be effective implementations for LL models …

DRLK: Dynamic hierarchical reasoning with language model and knowledge graph for question answering

M Zhang, R Dai, M Dong, T He - Proceedings of the 2022 …, 2022 - aclanthology.org
In recent years, Graph Neural Network (GNN) approaches with enhanced
knowledge graphs (KG) perform well in question answering (QA) tasks. One critical …

Improving task generalization via unified schema prompt

W Zhong, Y Gao, N Ding, Z Liu, M Zhou, J Wang, J Yin… - AI Open, 2023 - Elsevier
Task generalization has been a long-standing challenge in Natural Language Processing
(NLP). Recent research attempts to improve the task generalization ability of pre-trained …