Interactive natural language processing
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within
the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …
DUnE: Dataset for unified editing
Even the most advanced language models remain susceptible to errors, necessitating the ability to modify these models without initiating a comprehensive retraining process. Model editing …
DiscoPrompt: Path prediction prompt tuning for implicit discourse relation recognition
Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task of recognizing the discourse relations between arguments in the absence of discourse …
Interactive question answering systems: Literature review
Question-answering systems are recognized as popular and frequently effective means of
information seeking on the web. In such systems, information seekers can receive a concise …
UnifiedABSA: A unified ABSA framework based on multi-task instruction tuning
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained aspect-level
sentiment information. There are many ABSA tasks, and the current dominant paradigm is to …
Zero-Shot Learners for Natural Language Understanding via a Unified Multiple-Choice Perspective
Zero-shot learning is an approach where models generalize to unseen tasks without direct
training on them. We introduce the Unified Multiple-Choice (UniMC) framework, which is …
GAP: A novel Generative context-Aware Prompt-tuning method for relation extraction
Prompt-tuning was proposed to bridge the gap between pretraining and downstream tasks,
and it has achieved promising results in Relation Extraction (RE). Although the existing …
Domain incremental lifelong learning in an open world
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously.
Architecture-based approaches are reported to be effective implementations for LL models …
DRLK: Dynamic hierarchical reasoning with language model and knowledge graph for question answering
In recent years, Graph Neural Network (GNN) approaches with enhanced knowledge graphs (KG) perform well in question answering (QA) tasks. One critical …
Improving task generalization via unified schema prompt
Task generalization has been a long-standing challenge in Natural Language Processing
(NLP). Recent research attempts to improve the task generalization ability of pre-trained …