Never lost in the middle: Improving large language models via attention strengthening question answering

H Junqing, P Kunhao, D Xiaoqun, S Zhuoyang… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) are equipped with longer text input capabilities than
before, they struggle to find correct information in long contexts. The "lost in the …

Explore, select, derive, and recall: Augmenting LLM with human-like memory for mobile task automation

S Lee, J Choi, J Lee, H Choi, SY Ko… - arXiv preprint arXiv …, 2023 - researchgate.net
The advent of large language models (LLMs) has opened up new opportunities in the field
of mobile task automation. Their superior language understanding and reasoning …

Preference-Conditioned Language-Guided Abstraction

A Peng, A Bobu, BZ Li, TR Sumers… - Proceedings of the …, 2024 - dl.acm.org
Learning from demonstrations is a common way for users to teach robots, but it is prone to
spurious feature correlations. Recent work constructs state abstractions, i.e., visual …

AI for Mathematics: A Cognitive Science Perspective

CE Zhang, KM Collins, A Weller… - arXiv preprint arXiv …, 2023 - arxiv.org
Mathematics is one of the most powerful conceptual systems developed and used by the
human species. Dreams of automated mathematicians have a storied history in artificial …

Zero-shot compositional reinforcement learning in humans

People can easily evoke previously learned concepts, compose them, and apply the result
to solve novel tasks on the first attempt. The aim of this paper is to improve our …

Cognitive graphs: Representational substrates for planning

J Yoo, A Bornstein, ER Chrastil - 2023 - psyarxiv.com
Making plans for upcoming actions is a computationally demanding process. To mitigate
these demands, agents can build representations of states, actions, and their sequential …

Importance of prefrontal meta control in human-like reinforcement learning

JH Lee, JZ Leibo, SJ An, SW Lee - Frontiers in Computational …, 2022 - frontiersin.org
Recent investigations of reinforcement learning (RL) have demonstrated considerable
flexibility in dealing with various problems. However, such models often experience difficulty …

m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks

Z Ma, W Huang, J Zhang, T Gupta… - Synthetic Data for …, 2024 - openreview.net
Real-world multi-modal problems are rarely solved by a single machine learning model, and
often require multi-step computational plans that involve stitching together several models. Tool …

Exploring the hierarchical structure of human plans via program generation

CG Correa, S Sanborn, MK Ho, F Callaway… - arXiv preprint arXiv …, 2023 - arxiv.org
Human behavior is inherently hierarchical, resulting from the decomposition of a task into
subtasks or an abstract action into concrete actions. However, behavior is typically …

Group coordination catalyzes individual and cultural intelligence

CM Wu, R Dale, RD Hawkins - 2023 - psyarxiv.com
A large program of research has aimed to ground large-scale cultural phenomena in
processes taking place within individual minds. For example, investigating whether …