AMMU: a survey of transformer-based biomedical pretrained language models
KS Kalyan, A Rajasekharan, S Sangeetha - Journal of biomedical …, 2022 - Elsevier
Transformer-based pretrained language models (PLMs) have started a new era in modern
natural language processing (NLP). These models combine the power of transformers …
A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics
The utilization of large language models (LLMs) in the Healthcare domain has generated
both excitement and concern due to their ability to effectively respond to free-text queries with …
A review on Natural Language Processing Models for COVID-19 research
This survey paper reviews Natural Language Processing Models and their use in COVID-19
research in two main areas. Firstly, a range of transformer-based biomedical pretrained …
Uncertainty estimation and reduction of pre-trained models for text regression
State-of-the-art classification and regression models are often not well calibrated, and
cannot reliably provide uncertainty estimates, limiting their utility in safety-critical …
Iterative annotation of biomedical NER corpora with deep neural networks and knowledge bases
The large availability of clinical natural language documents, such as clinical narratives or
diagnoses, requires the definition of smart automatic systems for their processing and …
Learning from unlabelled data for clinical semantic textual similarity
Domain pretraining followed by task fine-tuning has become the standard paradigm
for NLP tasks, but requires in-domain labelled data for task fine-tuning. To overcome this, we …
Rethinking STS and NLI in large language models
In this study, we aim to rethink STS and NLI in the era of large language models (LLMs). We
first evaluate the accuracy of clinical/biomedical STS and NLI over five datasets, and then …
Effects of Human Adversarial and Affable Samples on BERT Generalizability
BERT-based models have had strong performance on leaderboards, yet have been
demonstrably worse in real-world settings requiring generalization. Limited quantities of …
EARA: Improving Biomedical Semantic Textual Similarity with Entity-Aligned Attention and Retrieval Augmentation
Measuring Semantic Textual Similarity (STS) is a fundamental task in biomedical
text processing, which aims at quantifying the similarity between two input biomedical …
Incorporating domain knowledge into language models by using graph convolutional networks for assessing semantic textual similarity: Model development …
Background Although electronic health record systems have facilitated clinical
documentation in health care, they have also introduced new challenges, such as the …