Deep Learning applications for COVID-19

C Shorten, TM Khoshgoftaar, B Furht - Journal of Big Data, 2021 - Springer
This survey explores how Deep Learning has battled the COVID-19 pandemic and provides
directions for future research on COVID-19. We cover Deep Learning applications in Natural …

Don't stop pretraining: Adapt language models to domains and tasks

S Gururangan, A Marasović, S Swayamdipta… - arXiv preprint arXiv …, 2020 - arxiv.org
Language models pretrained on text from a wide variety of sources form the foundation of
today's NLP. In light of the success of these broad-coverage models, we investigate whether …
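
The technique this paper is known for is domain-adaptive pretraining (DAPT): continuing the masked-language-modeling objective on unlabeled in-domain text before task fine-tuning. A minimal sketch using the Hugging Face transformers API follows; the base checkpoint, corpus file, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of domain-adaptive pretraining (DAPT):
# continue masked-LM training on unlabeled in-domain text.
# Checkpoint, corpus file, and hyperparameters are assumptions,
# not the exact setup from Gururangan et al. (2020).
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "roberta-base"  # base checkpoint (assumed)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "domain_corpus.txt": one in-domain document per line (hypothetical file).
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard MLM recipe.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-checkpoint",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()  # the adapted checkpoint is then fine-tuned on the task
```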

Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets

SL Blodgett, G Lopez, A Olteanu, R Sim… - Proceedings of the …, 2021 - aclanthology.org
Auditing NLP systems for computational harms like surfacing stereotypes is an elusive goal.
Several recent efforts have focused on benchmark datasets consisting of pairs of contrastive …

Beyond semantic distance: Automated scoring of divergent thinking greatly improves with large language models

P Organisciak, S Acar, D Dumas… - Thinking Skills and …, 2023 - Elsevier
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity
measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test …

ARBERT & MARBERT: Deep bidirectional transformers for Arabic

M Abdul-Mageed, AR Elmadany… - arXiv preprint arXiv …, 2020 - arxiv.org
Pre-trained language models (LMs) are currently integral to many natural language
processing systems. Although multilingual LMs were also introduced to serve many …

Ensemble distillation for robust model fusion in federated learning

T Lin, L Kong, SU Stich, M Jaggi - Advances in neural …, 2020 - proceedings.neurips.cc
Federated Learning (FL) is a machine learning setting where many devices collaboratively
train a machine learning model while keeping the training data decentralized. In most of the …
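
For readers unfamiliar with the setting: the canonical FL baseline, Federated Averaging (FedAvg), fuses client models by a data-weighted average of their parameters; this paper's ensemble distillation replaces that step with distillation on client predictions. Below is a minimal NumPy sketch of the plain averaging step only, for orientation; it is not the paper's method or code.

```python
# Minimal sketch of the FedAvg aggregation step that ensemble
# distillation aims to improve on. Pure NumPy; each client's model
# is represented as a list of parameter arrays. Illustrative only.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Data-weighted average of per-client parameter lists.

    client_weights: list over clients, each a list of np.ndarray layers.
    client_sizes:   number of local training examples per client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        acc = np.zeros_like(client_weights[0][layer])
        for weights, n in zip(client_weights, client_sizes):
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Two toy clients with a single 2x2 layer each:
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
print(fedavg(clients, client_sizes=[3, 1])[0])  # -> 0.75 everywhere
```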

IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages

D Kakwani, A Kunchukuttan, S Golla… - Findings of the …, 2020 - aclanthology.org
In this paper, we introduce NLP resources for 11 major Indian languages from two major
language families. These resources include: (a) large-scale sentence-level monolingual …

Elevater: A benchmark and toolkit for evaluating language-augmented visual models

C Li, H Liu, L Li, P Zhang, J Aneja… - Advances in …, 2022 - proceedings.neurips.cc
Learning visual representations from natural language supervision has recently shown great
promise in a number of pioneering works. In general, these language-augmented visual …

Pre-trained models for natural language processing: A survey

X Qiu, T Sun, Y Xu, Y Shao, N Dai, X Huang - Science China …, 2020 - Springer
Recently, the emergence of pre-trained models (PTMs) has brought natural language
processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs …

Language models are few-shot learners

T Brown, B Mann, N Ryder… - Advances in neural …, 2020 - proceedings.neurips.cc
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot
performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning …
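
"Few-shot" here means in-context learning: the model is given a handful of demonstrations inside its prompt and completes the next instance, with no gradient updates. A sketch of how such a prompt is assembled (the task and examples below are invented for illustration):

```python
# Sketch of assembling a few-shot prompt in the GPT-3 style:
# K labeled demonstrations followed by the query, no fine-tuning.
# The sentiment task and examples are made up for illustration.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("A stunning, heartfelt performance.", "positive"),
]
query = "The plot never quite comes together."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"  # the model completes this line

print(prompt)  # send to any large LM's completion endpoint
```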