Survey on sociodemographic bias in natural language processing
Deep neural networks often learn unintended bias during training, which might have harmful
effects when deployed in real-world settings. This work surveys 214 papers related to …
Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples
HJ Branch, JR Cefalu, J McHugh, L Hujer… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in the development of large language models have resulted in public
access to state-of-the-art pre-trained language models (PLMs), including Generative Pre …
Chinese named entity recognition method for the finance domain based on enhanced features and pretrained language models
H Zhang, X Wang, J Liu, L Zhang, L Ji - Information Sciences, 2023 - Elsevier
For some named entities in the Chinese finance domain that are long, have boundaries that are
difficult to delineate, and take diverse forms of expression, we propose a method based on …
End-to-end self-debiasing framework for robust NLU training
Existing Natural Language Understanding (NLU) models have been shown to incorporate
dataset biases leading to strong performance on in-distribution (ID) test sets but poor …
What do we Really Know about State of the Art NER?
S Vajjala, R Balasubramaniam - arXiv preprint arXiv:2205.00034, 2022 - arxiv.org
Named Entity Recognition (NER) is a well-researched NLP task and is widely used in
real-world NLP scenarios. NER research typically focuses on the creation of new ways of training …
Universal-KD: Attention-based output-grounded intermediate layer knowledge distillation
Intermediate layer matching is shown as an effective approach for improving knowledge
distillation (KD). However, this technique applies matching in the hidden spaces of two …
Information extraction from German radiological reports for general clinical text and language understanding
M Jantscher, F Gunzer, R Kern, E Hassler… - Scientific Reports, 2023 - nature.com
Recent advances in deep learning and natural language processing (NLP) have opened
many new opportunities for automatic text understanding and text processing in the medical …
Understanding demonstration-based learning from a causal perspective
Demonstration-based learning has shown impressive performance in exploiting pretrained
language models under few-shot learning settings. It is interesting to see that …
Towards building more robust NER datasets: An empirical study on NER dataset bias from a dataset difficulty view
Recently, many studies have illustrated the robustness problem of Named Entity
Recognition (NER) systems: the NER models often rely on superficial entity patterns for …
DUMB: A benchmark for smart evaluation of Dutch models
We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a diverse set of
datasets for low-, medium-, and high-resource tasks. The total set of nine tasks includes four …