Partially observable Markov decision processes in robotics: A survey

M Lauri, D Hsu, J Pajarinen - IEEE Transactions on Robotics, 2022 - ieeexplore.ieee.org
Noisy sensing, imperfect control, and environment changes are defining characteristics of
many real-world robot tasks. The partially observable Markov decision process (POMDP) …
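
For readers new to the formalism, planning in a POMDP is carried out over beliefs rather than states. As a minimal sketch in standard textbook notation (generic, not taken from this survey), the Bayes belief update after taking action a and receiving observation o is

\[
b'(s') \;=\; \eta \, Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s),
\qquad
\eta^{-1} \;=\; \sum_{s' \in S} Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s),
\]

where T is the transition model and Z the observation model; a POMDP policy maps beliefs b to actions.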

Dec-MCTS: Decentralized planning for multi-robot active perception

G Best, OM Cliff, T Patten, RR Mettu… - … International Journal of …, 2019 - journals.sagepub.com
We propose a decentralized variant of Monte Carlo tree search (MCTS) that is suitable for a
variety of tasks in multi-robot active perception. Our algorithm allows each robot to optimize …
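
Since the abstract is truncated here, a brief orientation: Dec-MCTS has each robot run a Monte Carlo tree search over its own action sequences and periodically exchange compressed distributions over plans with teammates. The sketch below is only the generic single-agent UCT primitive that such a method builds on, against a toy simulator interface; the function and parameter names are invented for illustration and this is not the authors' implementation.

import math
import random

class _Node:
    """One tree node: visit count, cumulative return, children keyed by action."""
    def __init__(self, state):
        self.state = state
        self.visits = 0
        self.value = 0.0
        self.children = {}

def uct_plan(root_state, actions, step, n_iters=2000, horizon=8, c=1.4):
    """Generic UCT; step(state, action) -> (next_state, reward) is a user-supplied simulator."""
    root = _Node(root_state)
    for _ in range(n_iters):
        node, path, ret, depth = root, [root], 0.0, 0
        while depth < horizon:
            untried = [a for a in actions if a not in node.children]
            if untried:
                a = random.choice(untried)          # expand one new child, then stop descending
                s, r = step(node.state, a)
                node.children[a] = _Node(s)
                node, ret, depth = node.children[a], ret + r, depth + 1
                path.append(node)
                break
            a = max(actions, key=lambda a: (        # otherwise follow the UCB1 rule
                node.children[a].value / node.children[a].visits
                + c * math.sqrt(math.log(node.visits) / node.children[a].visits)))
            s, r = step(node.state, a)
            node, ret, depth = node.children[a], ret + r, depth + 1
            path.append(node)
        state = node.state
        while depth < horizon:                      # random rollout to the horizon
            state, r = step(state, random.choice(actions))
            ret, depth = ret + r, depth + 1
        for n in path:                              # backpropagate the sampled return
            n.visits += 1
            n.value += ret
    return max(root.children, key=lambda a: root.children[a].visits)

# Toy usage: drive a scalar state toward 0.
# best = uct_plan(5, [-1, 0, 1], lambda s, a: (s + a, -abs(s + a)))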

Robust multiple-path orienteering problem: Securing against adversarial attacks

G Shi, L Zhou, P Tokekar - IEEE Transactions on Robotics, 2023 - ieeexplore.ieee.org
The multiple-path orienteering problem asks for paths for a team of robots that maximize the
total reward collected while satisfying budget constraints on the path length. This problem …
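
As a point of reference for the underlying (non-robust) problem, the sketch below is a simple greedy heuristic for single-path orienteering: repeatedly visit the site with the best reward-per-distance ratio that still fits in the remaining travel budget. It is purely illustrative; the paper's robust multi-path formulation additionally hedges against an adversary removing a subset of the team's paths, and the site data and helper names here are invented.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_orienteering(start, sites, budget):
    """sites: dict name -> (position, reward). Returns (visit order, collected reward)."""
    pos, path, reward, remaining = start, [], 0.0, budget
    unvisited = dict(sites)
    while unvisited:
        best, best_ratio = None, 0.0
        for name, (p, r) in unvisited.items():
            d = dist(pos, p)
            if d <= remaining and r / (d + 1e-9) > best_ratio:
                best, best_ratio = name, r / (d + 1e-9)
        if best is None:
            break                                   # nothing reachable within the budget
        p, r = unvisited.pop(best)
        remaining -= dist(pos, p)
        pos, reward = p, reward + r
        path.append(best)
    return path, reward

# Example: three sites with rewards 1, 2, 3 and a travel budget of 5 distance units.
sites = {"a": ((1, 0), 1.0), "b": ((2, 2), 2.0), "c": ((5, 5), 3.0)}
print(greedy_orienteering((0, 0), sites, budget=5.0))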

Autonomous thermalling as a partially observable Markov decision process (extended version)

I Guilliard, R Rogahn, J Piavis, A Kolobov - arXiv preprint arXiv …, 2018 - arxiv.org
Small uninhabited aerial vehicles (sUAVs) commonly rely on active propulsion to stay
airborne, which limits flight time and range. To address this, autonomous soaring seeks to …

Neuromorphic Robust Framework for Concurrent Estimation and Control in Dynamical Systems using Spiking Neural Networks

R Ahmadvand, SS Sharif, YM Banad - arXiv preprint arXiv:2310.03873, 2023 - arxiv.org
Concurrent estimation and control of robotic systems remains an ongoing challenge, where
controllers rely on data extracted from states/parameters riddled with uncertainties and …

Robust and adaptive planning under model uncertainty

A Sharma, J Harrison, M Tsao, M Pavone - Proceedings of the …, 2019 - ojs.aaai.org
Planning under model uncertainty is a fundamental problem across many applications of
decision making and learning. In this paper, we propose the Robust Adaptive Monte Carlo …
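
For orientation, one generic way to plan when the model itself is uncertain is root sampling: at the start of each Monte Carlo simulation, draw a candidate model from a belief over models and evaluate candidate first actions under that sample. The sketch below shows only this generic idea; it is not the paper's Robust Adaptive Monte Carlo planner, which additionally targets robust objectives and adapts the model belief online, and the function names are invented.

import random
from collections import defaultdict

def plan_root_sampling(belief_over_models, actions, rollout, n_sims=1000, horizon=10):
    """belief_over_models: list of (probability, model); rollout(model, action, horizon) -> return."""
    totals, counts = defaultdict(float), defaultdict(int)
    models, weights = zip(*[(m, p) for p, m in belief_over_models])
    for _ in range(n_sims):
        model = random.choices(models, weights=weights)[0]   # sample a model hypothesis
        a = random.choice(actions)                           # try first actions uniformly
        totals[a] += rollout(model, a, horizon)
        counts[a] += 1
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))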

Towards Uncertainty in Decision: A Survey on Recent Advances and Challenges in Bayesian Reinforcement Learning

Z Wang, H Meng, Z Zhou, Y Feng, Y Gao, C Yu - 2022 - researchsquare.com
Reinforcement learning is a research paradigm that is commonly utilized to tackle problems
involving sequential decision-making. Agents learn an optimal policy from samples generated …
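
One of the simplest Bayesian decision-making schemes in this literature is posterior (Thompson) sampling: maintain a posterior over unknown quantities, sample from it, and act greedily with respect to the sample. The Bernoulli-bandit sketch below illustrates only that generic idea under invented names, not any specific algorithm from the survey.

import random

def thompson_bandit(true_probs, n_rounds=1000):
    """Thompson sampling with a Beta(1, 1) prior per arm on a Bernoulli bandit."""
    k = len(true_probs)
    alpha, beta = [1] * k, [1] * k
    total_reward = 0
    for _ in range(n_rounds):
        samples = [random.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = samples.index(max(samples))            # act greedily on the posterior sample
        reward = 1 if random.random() < true_probs[arm] else 0
        alpha[arm] += reward                         # conjugate posterior update
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_bandit([0.2, 0.5, 0.8]))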

SACBP: Belief space planning for continuous-time dynamical systems via stochastic sequential action control

H Nishimura, M Schwager - The International Journal of …, 2021 - journals.sagepub.com
We propose a novel belief space planning technique for continuous dynamics by viewing
the belief system as a hybrid dynamical system with time-driven switching. Our approach is …
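
To make the "belief as a dynamical system" view concrete: for a linear-Gaussian model, the Kalman mean and covariance (mu, Sigma) themselves evolve under a deterministic map driven by the control and the received measurement. The discrete-time stand-in below only shows that state-augmentation idea; SACBP works with a continuous-time version of such belief dynamics and applies stochastic sequential action control, and the matrices here are illustrative placeholders.

import numpy as np

def belief_step(mu, Sigma, u, z, A, B, C, Q, R):
    """One predict/update step of the Gaussian belief state (mu, Sigma)."""
    mu_p = A @ mu + B @ u                            # predict through the dynamics
    Sigma_p = A @ Sigma @ A.T + Q
    S = C @ Sigma_p @ C.T + R                        # innovation covariance
    K = Sigma_p @ C.T @ np.linalg.inv(S)
    mu_new = mu_p + K @ (z - C @ mu_p)               # update with measurement z
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_p
    return mu_new, Sigma_new

# Toy double-integrator-style example with position-only measurements.
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]]); Q = 0.01 * np.eye(2); R = np.array([[0.05]])
mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = belief_step(mu, Sigma, np.array([1.0]), np.array([0.2]), A, B, C, Q, R)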

Active motion-based communication for robots with monocular vision

H Nishimura, M Schwager - 2018 IEEE International …, 2018 - ieeexplore.ieee.org
In this paper, we consider motion as a means of sending messages between robots. We
focus on a scenario in which a message is encoded in a sending robot's trajectory, and …

BADDr: Bayes-adaptive deep dropout RL for POMDPs

S Katt, H Nguyen, FA Oliehoek, C Amato - arXiv preprint arXiv:2202.08884, 2022 - arxiv.org
While reinforcement learning (RL) has made great advances in scalability, exploration and
partial observability are still active research topics. In contrast, Bayesian RL (BRL) provides …