A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …
News summarization and evaluation in the era of GPT-3
The recent success of zero- and few-shot prompting with models like GPT-3 has led to a
paradigm shift in NLP research. In this paper, we study its impact on text summarization …
Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP
⚠ This paper contains prompts and model outputs that are offensive in nature. When
trained on large, unfiltered crawls from the Internet, language models pick up and reproduce …
An empirical survey on long document summarization: Datasets, models, and metrics
Long documents such as academic articles and business reports have been the standard
format to detail out important issues and complicated subjects that require extra attention. An …
Sparks: Inspiration for science writing using language models
Large-scale language models are rapidly improving, performing well on a wide variety of
tasks with little to no customization. In this work we investigate how language models can …
From sparse to dense: GPT-4 summarization with chain of density prompting
Selecting the “right” amount of information to include in a summary is a difficult task. A good
summary should be detailed and entity-centric without being overly dense and hard to …
Sequence level contrastive learning for text summarization
Contrastive learning models have achieved great success in unsupervised visual
representation learning, maximizing the similarities between feature representations of …
Planning with learned entity prompts for abstractive summarization
We introduce a simple but flexible mechanism to learn an intermediate plan to ground the
generation of abstractive summaries. Specifically, we prepend (or prompt) target summaries …
Automatic text summarization methods: A comprehensive review
Text summarization is the process of condensing a long text into a shorter version by
maintaining the key information and its meaning. Automatic text summarization can save …