A survey of GPT-3 family large language models including ChatGPT and GPT-4

KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

News summarization and evaluation in the era of GPT-3

T Goyal, JJ Li, G Durrett - arXiv preprint arXiv:2209.12356, 2022 - arxiv.org
The recent success of zero-and few-shot prompting with models like GPT-3 has led to a
paradigm shift in NLP research. In this paper, we study its impact on text summarization …

Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP

T Schick, S Udupa, H Schütze - Transactions of the Association for …, 2021 - direct.mit.edu
Abstract⚠ This paper contains prompts and model outputs that are offensive in nature. When
trained on large, unfiltered crawls from the Internet, language models pick up and reproduce …

An empirical survey on long document summarization: Datasets, models, and metrics

HY Koh, J Ju, M Liu, S Pan - ACM Computing Surveys, 2022 - dl.acm.org
Long documents such as academic articles and business reports have been the standard
format to detail out important issues and complicated subjects that require extra attention. An …

Sparks: Inspiration for science writing using language models

KI Gero, V Liu, L Chilton - Proceedings of the 2022 ACM Designing …, 2022 - dl.acm.org
Large-scale language models are rapidly improving, performing well on a wide variety of
tasks with little to no customization. In this work we investigate how language models can …

From sparse to dense: GPT-4 summarization with chain of density prompting

G Adams, AR Fabbri, F Ladhak… - Proceedings of the …, 2023 - pmc.ncbi.nlm.nih.gov
Selecting the “right” amount of information to include in a summary is a difficult task. A good
summary should be detailed and entity-centric without being overly dense and hard to …

Sequence level contrastive learning for text summarization

S Xu, X Zhang, Y Wu, F Wei - Proceedings of the AAAI conference on …, 2022 - ojs.aaai.org
Contrastive learning models have achieved great success in unsupervised visual
representation learning, which maximize the similarities between feature representations of …

Planning with learned entity prompts for abstractive summarization

S Narayan, Y Zhao, J Maynez, G Simões… - Transactions of the …, 2021 - direct.mit.edu
We introduce a simple but flexible mechanism to learn an intermediate plan to ground the
generation of abstractive summaries. Specifically, we prepend (or prompt) target summaries …

Automatic text summarization methods: A comprehensive review

G Sharma, D Sharma - SN Computer Science, 2022 - Springer
Text summarization is the process of condensing a long text into a shorter version by
maintaining the key information and its meaning. Automatic text summarization can save …