Recent advances in natural language processing via large pre-trained language models: A survey
Large pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …
Machine knowledge: Creation and curation of comprehensive knowledge bases
Equipping machines with comprehensive knowledge of the world's entities and their
relationships has been a longstanding goal of AI. Over the last decade, large-scale …
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
While pretrained models such as BERT have shown large gains across natural language
understanding tasks, their performance can be improved by further training the model on a …
MRQA 2019 shared task: Evaluating generalization in reading comprehension
We present the results of the Machine Reading for Question Answering (MRQA) 2019
shared task on evaluating the generalization capabilities of reading comprehension …
Supervised open information extraction
We present data and methods that enable a supervised learning approach to Open
Information Extraction (Open IE). Central to the approach is a novel formulation of Open IE …
AmbigQA: Answering ambiguous open-domain questions
Ambiguity is inherent to open-domain question answering; especially when exploring new
topics, it can be difficult to ask questions that have a single, unambiguous answer. In this …
Break It Down: A Question Understanding Benchmark
Understanding natural language questions entails the ability to break down a question into
the requisite steps for computing its answer. In this work, we introduce a Question …
Evaluating factuality in generation with dependency-level entailment
Despite significant progress in text generation models, a serious limitation is their tendency
to produce text that is factually inconsistent with information in the input. Recent work has …
Zero-shot event extraction via transfer learning: Challenges and insights
Event extraction has long been a challenging task, addressed mostly with supervised
methods that require expensive annotation and are not extensible to new event ontologies …
Transforming question answering datasets into natural language inference datasets
Existing datasets for natural language inference (NLI) have propelled research on language
understanding. We propose a new method for automatically deriving NLI datasets from the …