Addressing 6 challenges in generative AI for digital health: A scoping review
Generative artificial intelligence (AI) can exhibit biases, compromise data privacy,
misinterpret prompts that are adversarial attacks, and produce hallucinations. Despite the …
Mapping the challenges of HCI: An application and evaluation of ChatGPT and GPT-4 for cost-efficient question answering
J Oppenlaender, J Hämäläinen - arXiv preprint arXiv:2306.05036, 2023 - arxiv.org
Large language models (LLMs), such as ChatGPT and GPT-4, are gaining widespread real-world use. Yet, the two LLMs are closed source, and little is known about the LLMs' …
Quiet-STaR: Language models can teach themselves to think before speaking
E Zelikman, G Harik, Y Shao, V Jayasiri… - arXiv preprint arXiv …, 2024 - arxiv.org
When writing and talking, people sometimes pause to think. Although reasoning-focused
works have often framed reasoning as a method of answering questions or completing …
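The core mechanism is easiest to see in miniature: at each step the model samples a short latent rationale, then blends the next-token predictions made with and without it. The sketch below is a toy rendering of that loop, assuming a stub `lm_logits` in place of a real model and a fixed mixing weight (the paper learns the mixing via a trained head):

```python
import numpy as np

VOCAB = 100
rng = np.random.default_rng(0)

def lm_logits(tokens: list[int]) -> np.ndarray:
    """Stub standing in for a real language model's next-token logits."""
    local = np.random.default_rng(hash(tuple(tokens)) % 2**32)
    return local.normal(size=VOCAB)

def sample(logits: np.ndarray) -> int:
    p = np.exp(logits - logits.max())
    return int(rng.choice(VOCAB, p=p / p.sum()))

def quiet_star_step(context: list[int], thought_len: int = 4, mix: float = 0.5) -> int:
    """One decoding step in the spirit of Quiet-STaR: sample a short latent
    'thought', then blend next-token logits with and without it."""
    base = lm_logits(context)
    thought = list(context)
    for _ in range(thought_len):                 # sample a rationale continuation
        thought.append(sample(lm_logits(thought)))
    mixed = (1 - mix) * base + mix * lm_logits(thought)
    return sample(mixed)

print(quiet_star_step([1, 2, 3]))
```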
Understanding transformer reasoning capabilities via graph algorithms
Which transformer scaling regimes are able to perfectly solve different classes of algorithmic
problems? While tremendous empirical advances have been attained by transformer-based …
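Studies in this vein typically serialize a graph task, such as connectivity, into a flat token sequence the transformer must answer. A minimal illustration, with an invented token format and a BFS oracle supplying the ground-truth label:

```python
from collections import defaultdict, deque

def encode_connectivity_task(edges, source, target):
    """Serialize a connectivity query into a flat token list (format invented)."""
    tokens = ["<connectivity>"]
    for u, v in edges:
        tokens += [f"n{u}", "-", f"n{v}", ","]
    return tokens + ["query", f"n{source}", f"n{target}", "?"]

def is_connected(edges, source, target):
    """BFS oracle providing the ground-truth label for supervision."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

edges = [(0, 1), (1, 2), (3, 4)]
print(encode_connectivity_task(edges, 0, 2))   # token sequence fed to the model
print(is_connected(edges, 0, 2))               # True
```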
Beyond the Frontier: Predicting Unseen Walls from Occupancy Grids by Learning from Floor Plans
L Ericson, P Jensfelt - IEEE Robotics and Automation Letters, 2024 - ieeexplore.ieee.org
In this paper, we tackle the challenge of predicting the unseen walls of a partially observed
environment as a set of 2D line segments, conditioned on occupancy grids integrated along …
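As a rough illustration of the task's input/output interface (not the authors' architecture), a CNN encoder over the occupancy grid can regress a fixed-size set of candidate wall segments, each an endpoint pair plus a confidence score:

```python
import torch
import torch.nn as nn

class WallPredictor(nn.Module):
    """Hypothetical baseline: occupancy grid in, candidate segments out."""
    def __init__(self, n_segments: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_segments * 5)  # (x1,y1,x2,y2)+conf
        self.n_segments = n_segments

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        out = self.head(self.encoder(grid))
        return out.view(-1, self.n_segments, 5)

grid = torch.rand(1, 1, 64, 64)      # partially observed occupancy grid
segments = WallPredictor()(grid)     # (1, 32, 5): endpoints plus confidence
```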
Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs
AK Singh, DJ Strouse - arXiv preprint arXiv:2402.14903, 2024 - arxiv.org
Tokenization, the division of input text into input tokens, is an often overlooked aspect of the
large language model (LLM) pipeline and could be the source of useful or harmful inductive …
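A concrete way to see the effect is to inspect how a frontier-model tokenizer chunks digit strings. OpenAI's `cl100k_base` encoding (requires `pip install tiktoken`), for instance, groups digits into runs of up to three, so place values align differently depending on operand length:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for s in ["1234", "12345", "1234 + 5678"]:
    ids = enc.encode(s)
    print(s, "->", [enc.decode([i]) for i in ids])
# e.g. "12345" splits as ["123", "45"], so the model sees different
# digit groupings for operands of different lengths
```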
SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully
Large language models (LLMs) demonstrate great performance in text generation. However, LLMs still suffer from hallucinations. In this work, we propose an inference-time …
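The snippet is truncated, but the gist is an inference-time intervention: surface the context tokens the model found least predictable, re-read the input with them highlighted, and contrast the two predictions. A heavily simplified sketch with stub model functions; the selection and contrast rules here are assumptions, not the paper's exact recipe:

```python
import numpy as np

VOCAB = 50

def lm_logits(tokens: list[int]) -> np.ndarray:
    """Stub for a real LM's next-token logits."""
    r = np.random.default_rng(hash(tuple(tokens)) % 2**32)
    return r.normal(size=VOCAB)

def lm_logprob(prefix: list[int], token: int) -> float:
    """Log-probability the stub LM assigned to `token` after `prefix`."""
    logits = lm_logits(prefix)
    return float(logits[token] - np.log(np.exp(logits).sum()))

def sh2_next_logits(context: list[int], k: int = 3, alpha: float = 1.0) -> np.ndarray:
    # 1. Score each context token by how surprising the model found it.
    surprisal = [-lm_logprob(context[:i], t) for i, t in enumerate(context)]
    key = [t for _, t in sorted(zip(surprisal, context), reverse=True)[:k]]
    # 2. Re-read the input with the least-predictable tokens prepended.
    plain, hesitant = lm_logits(context), lm_logits(key + context)
    # 3. Contrast the two predictions to sharpen what the hesitation added.
    return hesitant + alpha * (hesitant - plain)

print(sh2_next_logits([5, 17, 3, 42, 8]).argmax())
```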
" Sorry, Come Again?" Prompting--Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Hallucination has emerged as the most vulnerable aspect of contemporary Large Language
Models (LLMs). In this paper, we introduce the Sorry, Come Again (SCA) prompting, aimed …
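Mechanically this is a prompt transformation: paraphrase the query, then inject literal [PAUSE] markers so decoding proceeds more deliberately over the rephrased input. A toy injector; the paraphrasing step and the placement interval here are our assumptions:

```python
def inject_pauses(prompt: str, every_n_words: int = 8) -> str:
    """Insert a literal [PAUSE] marker after every n words (interval assumed)."""
    words, out = prompt.split(), []
    for i, w in enumerate(words, start=1):
        out.append(w)
        if i % every_n_words == 0 and i < len(words):
            out.append("[PAUSE]")
    return " ".join(out)

paraphrased = ("In other words: summarize the side effects that the cited "
               "trials reported for the new treatment, grouped by severity.")
print(inject_pauses(paraphrased))
```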
How Far Can Transformers Reason? The Locality Barrier and Inductive Scratchpad
Can Transformers predict new syllogisms by composing established ones? More generally,
what type of targets can be learned by such models from scratch? Recent works show that …
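A scratchpad sidesteps the locality barrier by materializing intermediate deductions instead of asking for the composed conclusion in one hop. A toy rendering for syllogism chains, where each step reuses the same induction rule; the format is illustrative, not the paper's:

```python
facts = [("A", "B"), ("B", "C"), ("C", "D")]   # "all A are B", etc.

def compose(facts, start, goal):
    """Chain one-hop facts into a scratchpad that derives start -> goal."""
    scratchpad, current = [], start
    chain = dict(facts)
    while current != goal:
        nxt = chain[current]                    # apply the same step inductively
        scratchpad.append(f"all {current} are {nxt}")
        current = nxt
    scratchpad.append(f"therefore all {start} are {goal}")
    return scratchpad

print("\n".join(compose(facts, "A", "D")))
```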
Block Transformer: Global-to-Local Language Modeling for Fast Inference
This paper presents the Block Transformer architecture, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks of self …
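In shape terms, the idea is coarse self-attention over per-block summaries followed by cheap attention within each block, conditioned on its global context vector. A minimal non-causal sketch; the actual model uses learned block embedders and decoders with causal masking:

```python
import torch
import torch.nn as nn

B, n_blocks, block_len, d = 2, 8, 4, 64
tokens = torch.randn(B, n_blocks * block_len, d)

blocks = tokens.view(B, n_blocks, block_len, d)
summaries = blocks.mean(dim=2)                      # (B, n_blocks, d): block summaries

global_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
context = global_layer(summaries)                   # attention only across blocks

local_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
local_in = blocks + context.unsqueeze(2)            # broadcast block context to tokens
local_out = local_layer(local_in.reshape(B * n_blocks, block_len, d))
out = local_out.reshape(B, n_blocks * block_len, d)  # per-token states; local attention
                                                     # cost scales with block_len only
```

The payoff is that the expensive global attention runs over n_blocks positions rather than n_blocks * block_len, while local decoding within a block stays short.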