Theory of Mind abilities of Large Language Models in Human-Robot Interaction: An Illusion?

M Verma, S Bhambri, S Kambhampati - Companion of the 2024 ACM …, 2024 - dl.acm.org
Large Language Models (LLMs) have shown exceptional generative abilities in various
natural language and generation tasks. However, possible anthropomorphization and …

Exploiting Unlabeled Data for Feedback Efficient Human Preference based Reinforcement Learning

M Verma, S Bhambri, S Kambhampati - arXiv preprint arXiv:2302.08738, 2023 - arxiv.org
Preference Based Reinforcement Learning has shown much promise for utilizing human
binary feedback on queried trajectory pairs to recover the underlying reward model of the …
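The snippet above (and the PbRL entries below) refer to learning a reward model from binary human feedback over queried trajectory pairs. As a point of reference only, here is a minimal sketch of the standard Bradley-Terry preference loss commonly used in PbRL; it is not the specific method of the paper above, and the class, variable names, and toy data are illustrative assumptions.

    # Illustrative PbRL reward-learning step (Bradley-Terry style), not the paper's method.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Maps a state-action feature vector to a scalar per-step reward estimate."""
        def __init__(self, obs_dim: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x).squeeze(-1)  # shape: (batch, T)

    def preference_loss(model, seg_a, seg_b, prefer_a):
        """Cross-entropy on the Bradley-Terry preference probability.

        seg_a, seg_b: (batch, T, obs_dim) tensors, the two queried trajectory segments.
        prefer_a:     (batch,) tensor, 1.0 if the human preferred segment A, else 0.0.
        """
        ret_a = model(seg_a).sum(dim=1)   # predicted return of segment A
        ret_b = model(seg_b).sum(dim=1)   # predicted return of segment B
        logits = ret_a - ret_b            # P(A preferred) = sigmoid(ret_a - ret_b)
        return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)

    # Toy usage with random data standing in for human-labeled trajectory pairs.
    obs_dim, T, batch = 8, 50, 16
    model = RewardModel(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    seg_a, seg_b = torch.randn(batch, T, obs_dim), torch.randn(batch, T, obs_dim)
    prefer_a = torch.randint(0, 2, (batch,)).float()
    loss = preference_loss(model, seg_a, seg_b, prefer_a)
    opt.zero_grad(); loss.backward(); opt.step()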

A mental model based theory of trust

Z Zahedi, S Sreedharan, S Kambhampati - arXiv preprint arXiv …, 2023 - arxiv.org
Handling trust is one of the core requirements for facilitating effective interaction between the
human and the AI agent. Thus, any decision-making framework designed to work with …

Data Driven Reward Initialization for Preference based Reinforcement Learning

M Verma, S Kambhampati - arXiv preprint arXiv:2302.08733, 2023 - arxiv.org
Preference-based Reinforcement Learning (PbRL) methods utilize binary feedback from the
human in the loop (HiL) over queried trajectory pairs to learn a reward model in an attempt …

A State Augmentation based approach to Reinforcement Learning from Human Preferences

M Verma, S Kambhampati - arXiv preprint arXiv:2302.08734, 2023 - arxiv.org
Reinforcement Learning has suffered from poor reward specification and from reward
hacking, even in relatively simple domains. Preference Based Reinforcement Learning …

Advice Conformance Verification by Reinforcement Learning agents for Human-in-the-Loop

M Verma, A Kharkwal, S Kambhampati - arXiv preprint arXiv:2210.03455, 2022 - arxiv.org
Human-in-the-loop (HiL) reinforcement learning is gaining traction in domains with large
action and state spaces and sparse rewards, by allowing the agent to take advice from the HiL …

Computational Accounts of Trust in Human AI Interaction

Z Zahedi - 2023 - search.proquest.com
The growing presence of AI-driven systems in everyday life calls for the development of
efficient methods to facilitate interactions between humans and AI agents. At the heart of …

Modeling, Engendering and Leveraging Trust in Human-Robot Interaction: A Mental Model Based Framework

Z Zahedi - Companion of the 2024 ACM/IEEE International …, 2024 - dl.acm.org
Trust between team members is a necessary part of any successful cooperation. Therefore,
in mixed human-robot teams, the robot must possess the ability to model, assess and …