Biomedical question answering: A survey of approaches and challenges

Q Jin, Z Yuan, G Xiong, Q Yu, H Ying, C Tan… - ACM Computing …, 2022 - dl.acm.org
Automatic Question Answering (QA) has been successfully applied in various domains such
as search engines and chatbots. Biomedical QA (BQA), as an emerging QA task, enables …

Recent advances in natural language inference: A survey of benchmarks, resources, and approaches

S Storks, Q Gao, JY Chai - arXiv preprint arXiv:1904.01172, 2019 - arxiv.org
In the NLP community, recent years have seen a surge of research activities that address
machines' ability to perform deep language understanding which goes beyond what is …

A survey on deep learning approaches for text-to-SQL

G Katsogiannis-Meimarakis, G Koutrika - The VLDB Journal, 2023 - Springer
To bridge the gap between users and data, numerous text-to-SQL systems have been
developed that allow users to pose natural language questions over relational databases …
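To make the text-to-SQL task concrete, here is a minimal sketch: a toy schema and a hand-written query standing in for model output. The `employees` table and the question are illustrative only, not examples from the survey.

```python
import sqlite3

# A toy relational schema; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT, salary REAL);
    INSERT INTO employees VALUES
        (1, 'Ada', 'Engineering', 95000),
        (2, 'Grace', 'Engineering', 105000),
        (3, 'Alan', 'Research', 88000);
""")

# The text-to-SQL task: map a natural-language question to an executable query.
question = "Which department has the highest average salary?"

# Stand-in for a model prediction; a real system would generate this string.
predicted_sql = """
    SELECT dept FROM employees
    GROUP BY dept
    ORDER BY AVG(salary) DESC
    LIMIT 1;
"""

print(conn.execute(predicted_sql).fetchone())  # ('Engineering',)
```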

Grammar prompting for domain-specific language generation with large language models

B Wang, Z Wang, X Wang, Y Cao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large language models (LLMs) can learn to perform a wide range of natural language tasks
from just a handful of in-context examples. However, for generating strings from highly …
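A minimal sketch of the grammar-prompting pattern this paper studies: the prompt carries a BNF grammar for the target DSL so generation is steered toward grammatical strings. The grammar, the example DSL, and `build_prompt` are assumptions for illustration, not the paper's implementation.

```python
# Grammar prompting sketch: embed a BNF grammar for the target DSL in the
# prompt, followed by in-context examples. The grammar below is invented.

GRAMMAR = """
query    ::= "FIND" entity filter
entity   ::= "flights" | "hotels"
filter   ::= "WHERE" field "=" value
field    ::= "city" | "date"
value    ::= STRING
"""

def build_prompt(question: str) -> str:
    return (
        "You write programs in a small DSL defined by this BNF grammar:\n"
        f"{GRAMMAR}\n"
        'Question: list flights to Boston\n'
        'Program: FIND flights WHERE city = "Boston"\n'
        f"Question: {question}\n"
        "Program:"
    )

# The resulting prompt would be sent to any completion-style LLM API.
print(build_prompt("show hotels in Paris"))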

Transformers as soft reasoners over language

P Clark, O Tafjord, K Richardson - arXiv preprint arXiv:2002.05867, 2020 - arxiv.org
Beginning with McCarthy's Advice Taker (1959), AI has pursued the goal of providing a
system with explicit, general knowledge and having the system reason over that knowledge …

ProofWriter: Generating implications, proofs, and abductive statements over natural language

O Tafjord, BD Mishra, P Clark - arXiv preprint arXiv:2012.13048, 2020 - arxiv.org
Transformers have been shown to emulate logical deduction over natural language theories
(logical rules expressed in natural language), reliably assigning true/false labels to …
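The deduction task these two papers train transformers to emulate can be shown symbolically: forward-chain over natural-language-style rules to a fixpoint, then label a query true or false. The facts and rules below are illustrative (in the style of the RuleTaker/ProofWriter datasets), and this symbolic solver is the task definition, not the neural model.

```python
# Forward chaining over simple (subject, relation, attribute) triples.
facts = {("Anne", "is", "kind")}
rules = [
    # "If someone is kind then they are nice."
    (("is", "kind"), ("is", "nice")),
    # "If someone is nice then they are green."
    (("is", "nice"), ("is", "green")),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            for subj, rel, attr in list(derived):
                if (rel, attr) == pre and (subj, *post) not in derived:
                    derived.add((subj, *post))
                    changed = True
    return derived

closure = forward_chain(facts, rules)
print(("Anne", "is", "green") in closure)  # True  (kind -> nice -> green)
print(("Anne", "is", "blue") in closure)   # False under closed-world assumption
```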

Injecting numerical reasoning skills into language models

M Geva, A Gupta, J Berant - arXiv preprint arXiv:2004.04487, 2020 - arxiv.org
Large pre-trained language models (LMs) are known to encode substantial amounts of
linguistic information. However, high-level reasoning skills, such as numerical reasoning …
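Geva et al. inject such skills by pre-training on automatically generated numeric data. A minimal sketch of that kind of synthetic-data generation follows; the templates are invented for illustration, not the paper's exact ones.

```python
# Generate synthetic (question, answer) pairs for numerical pre-training.
import random

TEMPLATES = [
    ("What is {a} plus {b}?", lambda a, b: a + b),
    ("What is {a} minus {b}?", lambda a, b: a - b),
    ("What is the larger of {a} and {b}?", lambda a, b: max(a, b)),
]

def generate_example(rng: random.Random):
    template, answer_fn = rng.choice(TEMPLATES)
    a, b = rng.randint(0, 999), rng.randint(0, 999)
    return template.format(a=a, b=b), str(answer_fn(a, b))

rng = random.Random(0)
for _ in range(3):
    print(generate_example(rng))
```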

RASAT: Integrating relational structures into pretrained seq2seq model for text-to-SQL

J Qi, J Tang, Z He, X Wan, Y Cheng, C Zhou… - arXiv preprint arXiv …, 2022 - arxiv.org
Relational structures such as schema linking and schema encoding have been validated as
a key component to qualitatively translating natural language into SQL queries. However …
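Schema linking, the relational structure at issue here, can be illustrated with a naive string-match heuristic: align question tokens with table and column names. The schema and matching rules below are assumptions for illustration; RASAT itself encodes such links as relation-aware self-attention biases in the seq2seq encoder rather than computing them this way.

```python
# Naive schema linking: align question tokens with schema items.
schema = {"employees": ["id", "name", "dept", "salary"]}

def link_schema(question: str, schema: dict) -> list:
    links = []
    tokens = question.lower().replace("?", "").split()
    for table, columns in schema.items():
        for tok in tokens:
            # Crude singular/plural-insensitive table match.
            if tok == table or tok.rstrip("s") == table.rstrip("s"):
                links.append((tok, "table", table))
            for col in columns:
                if tok == col:
                    links.append((tok, "column", f"{table}.{col}"))
    return links

print(link_schema("What is the average salary of employees?", schema))
# [('salary', 'column', 'employees.salary'), ('employees', 'table', 'employees')]
```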

Learning contextual representations for semantic parsing with generation-augmented pre-training

P Shi, P Ng, Z Wang, H Zhu, AH Li, J Wang… - Proceedings of the …, 2021 - ojs.aaai.org
Most recently, there has been significant interest in learning contextual representations for
various NLP tasks, by leveraging large scale text corpora to train powerful language models …

On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on Codex

TY Zhuo, Z Li, Y Huang, F Shiri, W Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Semantic parsing is a technique aimed at constructing a structured representation of the
meaning of a natural-language question. Recent advancements in few-shot language …
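The robustness question this paper studies can be sketched in miniature: does a prompt-based parser return the same program when the input is perturbed? Both `perturb` and the placeholder `parse` below are assumptions; the paper's actual perturbation suite and Codex prompting setup are much richer.

```python
# Consistency-under-perturbation check for a prompt-based semantic parser.

def perturb(question: str) -> list:
    # Simple surface perturbations (synonym swap, casing, punctuation) as
    # stand-ins for a fuller perturbation suite.
    return [
        question.replace("movies", "films"),
        question.lower(),
        question.rstrip("?") + " ?",
    ]

def parse(question: str) -> str:
    # Placeholder: a real study would call an LLM with few-shot exemplars
    # here and return the predicted logical form or SQL string.
    return "SELECT title FROM movies WHERE year = 2020"

original = "Which movies were released in 2020?"
predictions = {q: parse(q) for q in [original, *perturb(original)]}
consistent = len(set(predictions.values())) == 1
print(f"consistent under perturbation: {consistent}")
```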