Few-shot learning for medical text: A review of advances, trends, and opportunities

Y Ge, Y Guo, S Das, MA Al-Garadi, A Sarker - Journal of Biomedical …, 2023 - Elsevier
Background: Few-shot learning (FSL) is a class of machine learning methods that require
small numbers of labeled instances for training. With many medical topics having limited …
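
As a hedged illustration of the setting this review surveys, the sketch below assembles a k-shot prompt for a medical text classifier from a handful of labeled instances. The label set, note texts, and query are hypothetical placeholders, not examples from the paper.

```python
# Minimal sketch of k-shot prompting for medical text classification.
# All labels and example texts below are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("Patient reports persistent cough and fever.", "respiratory"),
    ("Elevated fasting glucose noted at follow-up.", "endocrine"),
    ("No acute distress; routine wellness visit.", "none"),
]

def build_kshot_prompt(query: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Format k labeled instances followed by the unlabeled query."""
    lines = ["Classify each clinical note by topic."]
    for text, label in examples:
        lines.append(f"Note: {text}\nTopic: {label}")
    lines.append(f"Note: {query}\nTopic:")  # model completes the label
    return "\n\n".join(lines)

print(build_kshot_prompt("Shortness of breath on exertion."))
```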

Learn to explain: Multimodal reasoning via thought chains for science question answering

P Lu, S Mishra, T Xia, L Qiu… - Advances in …, 2022 - proceedings.neurips.cc
When answering a question, humans utilize the information available across different
modalities to synthesize a consistent and complete chain of thought (CoT). This process is …
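
A minimal sketch of the chain-of-thought prompting pattern this work builds on: the prompt elicits intermediate reasoning before the final answer. The question, options, and reasoning trigger below are illustrative assumptions, not items from the ScienceQA benchmark.

```python
# Sketch of a chain-of-thought (CoT) prompt for multiple-choice science QA.
# Question, options, and the reasoning trigger are illustrative assumptions.

def build_cot_prompt(question: str, options: list[str]) -> str:
    opts = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    return (
        f"Question: {question}\n{opts}\n"
        # Eliciting an explicit reasoning chain before the answer is the
        # core CoT idea; the exact trigger wording varies across papers.
        "Answer: Let's think step by step."
    )

print(build_cot_prompt(
    "Which property of a mineral can be determined by a scratch test?",
    ["hardness", "color", "luster"],
))
```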

Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks

Y Wang, S Mishra, P Alipoormolabashi, Y Kordi… - arXiv preprint arXiv …, 2022 - arxiv.org
How well can NLP models generalize to a variety of unseen tasks when provided with task
instructions? To address this question, we first introduce Super-NaturalInstructions, a …
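
The sketch below renders one declarative task definition plus a demonstration into a prompt, in the spirit of the benchmark's instruction format. The field names and the example task are assumptions, not the benchmark's exact schema.

```python
# Sketch of a declarative task instruction paired with instances, in the
# spirit of Super-NaturalInstructions. Field names are an assumption, not
# the benchmark's exact schema.

task = {
    "definition": "Given a sentence, label its sentiment as positive or negative.",
    "positive_examples": [
        {"input": "The service was wonderful.", "output": "positive"},
    ],
    "instances": [
        {"input": "The food arrived cold and late.", "output": "negative"},
    ],
}

def render(task: dict, instance: dict) -> str:
    """Render instruction + demonstration + unseen instance as one prompt."""
    demo = task["positive_examples"][0]
    return (
        f"Definition: {task['definition']}\n"
        f"Example input: {demo['input']}\nExample output: {demo['output']}\n"
        f"Input: {instance['input']}\nOutput:"
    )

print(render(task, task["instances"][0]))
```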

“What it wants me to say”: Bridging the abstraction gap between end-user programmers and code-generating large language models

MX Liu, A Sarkar, C Negreanu, B Zorn… - Proceedings of the …, 2023 - dl.acm.org
Code-generating large language models map natural language to code. However, only a
small portion of the infinite space of naturalistic utterances is effective at guiding code …
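
To make the abstraction gap concrete, here is a minimal sketch of how a plain-English request might be framed as a completion-style code-generation prompt; the framing convention and request wordings are illustrative assumptions, not the paper's protocol.

```python
# Sketch of a natural-language-to-code prompt, the mapping the paper's
# end-user programmers must navigate. The comment-style framing is one
# common convention; the request wordings are illustrative assumptions.

def build_codegen_prompt(request: str) -> str:
    """Frame a plain-English request as a completion-style coding prompt."""
    return (
        "# Python 3\n"
        f"# Task: {request}\n"
        "def solution():\n"
    )

# Two phrasings of the same intent; the paper's point is that only some
# naturalistic utterances steer the model effectively.
print(build_codegen_prompt("sum the even numbers in a list"))
print(build_codegen_prompt("add up every item that two goes into"))
```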

Thinking about GPT-3 in-context learning for biomedical IE? Think again

BJ Gutierrez, N McNeal, C Washington, Y Chen… - arXiv preprint arXiv …, 2022 - arxiv.org
The strong few-shot in-context learning capability of large pre-trained language models
(PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine …
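
A hedged sketch of the few-shot in-context setup the paper stress-tests: labeled demonstrations followed by an unlabeled sentence for biomedical entity extraction. The sentences and entity tags are hypothetical, not drawn from any benchmark.

```python
# Sketch of few-shot in-context learning for biomedical entity extraction.
# Sentences and entity tags below are hypothetical placeholders.

DEMONSTRATIONS = [
    ("Metformin reduced HbA1c in type 2 diabetes patients.",
     "Drug: Metformin | Disease: type 2 diabetes"),
    ("Aspirin is contraindicated in patients with peptic ulcer.",
     "Drug: Aspirin | Disease: peptic ulcer"),
]

def build_ie_prompt(sentence: str) -> str:
    parts = ["Extract drugs and diseases from each sentence."]
    for text, tags in DEMONSTRATIONS:
        parts.append(f"Sentence: {text}\nEntities: {tags}")
    parts.append(f"Sentence: {sentence}\nEntities:")
    return "\n\n".join(parts)

print(build_ie_prompt("Ibuprofen may worsen hypertension."))
```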

Towards LogiGLUE: A brief survey and a benchmark for analyzing logical reasoning capabilities of language models

M Luo, S Kumbhar, M Parmar, N Varshney… - arXiv preprint arXiv …, 2023 - arxiv.org
Logical reasoning is fundamental for humans yet presents a substantial challenge in the
domain of Artificial Intelligence. Initially, researchers used Knowledge Representation and …

Style over substance: Evaluation biases for large language models

M Wu, AF Aji - arXiv preprint arXiv:2307.03025, 2023 - arxiv.org
As large language models (LLMs) continue to advance, accurately and comprehensively
evaluating their performance becomes increasingly challenging. Conventionally, human …
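
One common mitigation for the order-related biases this paper examines is to judge each answer pair in both presentation orders and keep only verdicts that agree. A minimal sketch, with a stub in place of a real LLM judge:

```python
# Sketch of position-debiased pairwise evaluation: judge each answer pair
# in both orders and keep a verdict only when the two orders agree. The
# judge here is a stub; a real setup would call an LLM judge instead.

def judge(answer_a: str, answer_b: str) -> str:
    """Stub judge; returns 'A', 'B', or 'tie'. Replace with an LLM call."""
    return "A" if len(answer_a) >= len(answer_b) else "B"

def debiased_compare(ans1: str, ans2: str) -> str:
    forward = judge(ans1, ans2)    # ans1 shown in first position
    backward = judge(ans2, ans1)   # ans2 shown in first position
    flipped = {"A": "B", "B": "A", "tie": "tie"}[backward]
    # Only trust verdicts that survive swapping the presentation order.
    return forward if forward == flipped else "tie"

print(debiased_compare("Short but correct.", "A longer, more stylish answer."))
```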

BigBIO: A framework for data-centric biomedical natural language processing

J Fries, L Weber, N Seelam, G Altay… - Advances in …, 2022 - proceedings.neurips.cc
Training and evaluating language models increasingly requires the construction of meta-
datasets--diverse collections of curated data with clear provenance. Natural language …
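
As a rough sketch of the curated-provenance idea, the record type below carries the metadata a meta-dataset entry might track. The dataclass fields are illustrative assumptions, not BigBIO's actual interface.

```python
# Sketch of a meta-dataset record with explicit provenance, the kind of
# curation the framework argues for. Fields are illustrative assumptions,
# not BigBIO's actual interface.

from dataclasses import dataclass

@dataclass
class DatasetCard:
    name: str            # canonical dataset identifier
    source_url: str      # where the raw data originates
    license: str         # redistribution terms
    task_type: str       # e.g. NER, QA, relation extraction
    languages: list[str]

meta_dataset = [
    DatasetCard(
        name="example_biomed_ner",
        source_url="https://example.org/data",  # placeholder URL
        license="CC-BY-4.0",
        task_type="NER",
        languages=["en"],
    ),
]

# Filtering by task type is a typical meta-dataset query pattern.
ner_sets = [c for c in meta_dataset if c.task_type == "NER"]
print([c.name for c in ner_sets])
```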

GIMLET: A unified graph-text model for instruction-based molecule zero-shot learning

H Zhao, S Liu, M Chang, H Xu, J Fu… - Advances in …, 2023 - proceedings.neurips.cc
Molecule property prediction has gained significant attention in recent years. The main
bottleneck is the label insufficiency caused by expensive lab experiments. In order to …

nach0: Multimodal natural and chemical languages foundation model

M Livne, Z Miftahutdinov, E Tutubalina… - Chemical …, 2024 - pubs.rsc.org
Large Language Models (LLMs) have substantially driven scientific progress in various
domains, and many papers have demonstrated their ability to tackle complex problems with …