Few-shot learning for medical text: A review of advances, trends, and opportunities
Background: Few-shot learning (FSL) is a class of machine learning methods that require
small numbers of labeled instances for training. With many medical topics having limited …
Learn to explain: Multimodal reasoning via thought chains for science question answering
When answering a question, humans utilize the information available across different
modalities to synthesize a consistent and complete chain of thought (CoT). This process is …
Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks
How well can NLP models generalize to a variety of unseen tasks when provided with task
instructions? To address this question, we first introduce Super-NaturalInstructions, a …
“What it wants me to say”: Bridging the abstraction gap between end-user programmers and code-generating large language models
Code-generating large language models map natural language to code. However, only a
small portion of the infinite space of naturalistic utterances is effective at guiding code …
Thinking about GPT-3 in-context learning for biomedical IE? Think again
The strong few-shot in-context learning capability of large pre-trained language models
(PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine …
Towards LogiGLUE: A brief survey and a benchmark for analyzing logical reasoning capabilities of language models
Logical reasoning is fundamental for humans yet presents a substantial challenge in the
domain of Artificial Intelligence. Initially, researchers used Knowledge Representation and …
Style over substance: Evaluation biases for large language models
As large language models (LLMs) continue to advance, accurately and comprehensively
evaluating their performance becomes increasingly challenging. Conventionally, human …
BigBIO: A framework for data-centric biomedical natural language processing
Training and evaluating language models increasingly requires the construction of meta-datasets: diverse collections of curated data with clear provenance. Natural language …
GIMLET: A unified graph-text model for instruction-based molecule zero-shot learning
Molecule property prediction has gained significant attention in recent years. The main
bottleneck is the label insufficiency caused by expensive lab experiments. In order to …
nach0: Multimodal natural and chemical languages foundation model
Large Language Models (LLMs) have substantially driven scientific progress in various
domains, and many papers have demonstrated their ability to tackle complex problems with …