AMMU: a survey of transformer-based biomedical pretrained language models

KS Kalyan, A Rajasekharan, S Sangeetha - Journal of biomedical …, 2022 - Elsevier
Transformer-based pretrained language models (PLMs) have started a new era in modern
natural language processing (NLP). These models combine the power of transformers …

A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics

K He, R Mao, Q Lin, Y Ruan, X Lan, M Feng… - arXiv preprint arXiv …, 2023 - arxiv.org
The utilization of large language models (LLMs) in the healthcare domain has generated
both excitement and concern due to their ability to effectively respond to free-text queries with …

A review on Natural Language Processing Models for COVID-19 research

K Hall, V Chang, C Jayne - Healthcare Analytics, 2022 - Elsevier
This survey paper reviews Natural Language Processing Models and their use in COVID-19
research in two main areas. Firstly, a range of transformer-based biomedical pretrained …

Uncertainty estimation and reduction of pre-trained models for text regression

Y Wang, D Beck, T Baldwin, K Verspoor - Transactions of the …, 2022 - direct.mit.edu
State-of-the-art classification and regression models are often not well calibrated, and
cannot reliably provide uncertainty estimates, limiting their utility in safety-critical …

Iterative annotation of biomedical NER corpora with deep neural networks and knowledge bases

S Silvestri, F Gargiulo, M Ciampi - Applied sciences, 2022 - mdpi.com
The large availability of clinical natural language documents, such as clinical narratives or
diagnoses, requires the definition of smart automatic systems for their processing and …

Learning from unlabelled data for clinical semantic textual similarity

Y Wang, K Verspoor, T Baldwin - Proceedings of the 3rd Clinical …, 2020 - aclanthology.org
Domain pretraining followed by task fine-tuning has become the standard paradigm
for NLP tasks, but requires in-domain labelled data for task fine-tuning. To overcome this, we …

Rethinking STS and NLI in large language models

Y Wang, M Wang, P Nakov - arXiv preprint arXiv:2309.08969, 2023 - arxiv.org
In this study, we aim to rethink STS and NLI in the era of large language models (LLMs). We
first evaluate the accuracy of clinical/biomedical STS and NLI over five datasets, and then …

Effects of Human Adversarial and Affable Samples on BERT Generalizability

A Elangovan, J He, Y Li, K Verspoor - arXiv preprint arXiv:2310.08008, 2023 - arxiv.org
BERT-based models have achieved strong performance on leaderboards, yet perform
demonstrably worse in real-world settings that require generalization. Limited quantities of …

EARA: Improving Biomedical Semantic Textual Similarity with Entity-Aligned Attention and Retrieval Augmentation

Y Xiong, X Yang, L Liu, KC Wong, Q Chen… - Findings of the …, 2023 - aclanthology.org
Measuring Semantic Textual Similarity (STS) is a fundamental task in biomedical
text processing, which aims at quantifying the similarity between two input biomedical …

Incorporating domain knowledge into language models by using graph convolutional networks for assessing semantic textual similarity: Model development …

D Chang, E Lin, C Brandt, RA Taylor - JMIR medical informatics, 2021 - medinform.jmir.org
Although electronic health record systems have facilitated clinical
documentation in health care, they have also introduced new challenges, such as the …