AMMUS: A survey of transformer-based pretrained models in natural language processing
KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv …, 2021 - arxiv.org
Transformer-based pretrained language models (T-PTLMs) have achieved great success in
almost every NLP task. The evolution of these models started with GPT and BERT. These …
A review on sentiment analysis from social media platforms
M Rodríguez-Ibánez, A Casánez-Ventura… - Expert Systems with …, 2023 - Elsevier
Sentiment analysis has proven to be a valuable tool to gauge public opinion in different
disciplines. It has been successfully employed in financial market prediction, health issues …
ChatGPT: Jack of all trades, master of none
OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and
revolutionized the approach in artificial intelligence to human-model interaction. The first …
RWKV: Reinventing RNNs for the transformer era
Transformers have revolutionized almost all natural language processing (NLP) tasks but
suffer from memory and computational complexity that scales quadratically with sequence …
Rethinking the role of demonstrations: What makes in-context learning work?
Large language models (LMs) are able to in-context learn--perform a new task via inference
alone by conditioning on a few input-label pairs (demonstrations) and making predictions for …
MetaICL: Learning to learn in context
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training
framework for few-shot learning where a pretrained language model is tuned to do in …
A holistic approach to undesired content detection in the real world
We present a holistic approach to building a robust and useful natural language
classification system for real-world content moderation. The success of such a system relies …
From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models
Language models (LMs) are pretrained on diverse data sources, including news, discussion
forums, books, and online encyclopedias. A significant portion of this data includes opinions …
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
Despite this success, the process of fine-tuning large-scale PLMs brings prohibitive
adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining …
Sentiment analysis in the era of large language models: A reality check
Sentiment analysis (SA) has been a long-standing research area in natural language
processing. It can offer rich insights into human sentiments and opinions and has thus seen …