Learning Explainable and Better Performing Representations of POMDP Strategies

A Bork, D Chakraborty, K Grover, J Křetínský… - … Conference on Tools …, 2024 - Springer
Strategies for partially observable Markov decision processes (POMDPs) typically require
memory. One way to represent this memory is via automata. We present a method to learn …

Weakest precondition inference for non-deterministic linear array programs

S Sumanth Prabhu, D D'Souza, S Chakraborty… - … Conference on Tools …, 2024 - Springer
Precondition inference is an important problem with many applications. Existing
precondition inference techniques for programs with arrays have limited ability to find and …

Strong Simple Policies for POMDPs

L Winterer, R Wimmer, B Becker, N Jansen - International Journal on …, 2024 - Springer
The synthesis problem for partially observable Markov decision processes (POMDPs) is to
compute a policy that provably adheres to one or more specifications. Yet, the general …

Sound Heuristic Search Value Iteration for Undiscounted POMDPs with Reachability Objectives

QH Ho, MS Feather, F Rossi, ZN Sunberg… - arXiv preprint arXiv …, 2024 - arxiv.org
Partially Observable Markov Decision Processes (POMDPs) are powerful models for
sequential decision making under transition and observation uncertainties. This paper …

Tools and Algorithms for the Construction and Analysis of Systems LNCS 14571

This three-volume proceedings contains the papers presented at the 30th International
Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS …

[PDF][PDF] Search and Explore: Symbiotic Policy Synthesis in POMDPs

S Junges, JP Katoen, F Macák - publications.rwth-aachen.de
This paper marries two state-of-the-art controller synthesis methods for partially observable
Markov decision processes (POMDPs), a prominent model in sequential decision making …

[PDF][PDF] USING INHERITANCE DEPENDENCIES TO ACCELERATE ABSTRACTION-BASED SYNTHESIS OF FINITE-STATE CONTROLLERS FOR POMDPS

A Shevchenko - theses.cz
A partially observable Markov decision process is an important model for autonomous
planning used in many areas, such as robotics and biology. This work focuses on the …

[PDF][PDF] USING REINFORCEMENT LEARNING AND INDUCTIVE SYNTHESIS FOR DESIGNING ROBUST CONTROLLERS IN POMDPS

BD HUDÁK - itspy.cz
A significant challenge in sequential decision-making involves dealing with uncertainty,
which arises from inaccurate sensors or only partial knowledge of the agent's environment …