Search and explore: symbiotic policy synthesis in POMDPs

R Andriushchenko, A Bork, M Češka, S Junges… - … on Computer Aided …, 2023 - Springer
This paper marries two state-of-the-art controller synthesis methods for partially observable
Markov decision processes (POMDPs), a prominent model in sequential decision making …

Learning logic specifications for policy guidance in pomdps: an inductive logic programming approach

D Meli, A Castellini, A Farinelli - Journal of Artificial Intelligence Research, 2024 - jair.org
Partially Observable Markov Decision Processes (POMDPs) are a powerful
framework for planning under uncertainty. They allow modeling state uncertainty as a belief …
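
The belief mentioned here is the standard Bayesian posterior over hidden states. As a rough orientation only, the sketch below implements the textbook belief update in Python; the function name, the T/O tables, and the two-state toy example are illustrative assumptions, not material from this paper.

def belief_update(belief, action, observation, T, O):
    """Standard Bayes-filter belief update for a POMDP.

    belief: dict state -> probability
    T:      dict (state, action) -> dict of next_state -> probability
    O:      dict (next_state, action) -> dict of observation -> probability
    """
    successors = {s2 for (s, a), dist in T.items() if a == action for s2 in dist}
    new_belief = {}
    for s_next in successors:
        # Predict with the transition model, then weight by the observation likelihood.
        pred = sum(belief.get(s, 0.0) * T.get((s, action), {}).get(s_next, 0.0)
                   for s in belief)
        new_belief[s_next] = O.get((s_next, action), {}).get(observation, 0.0) * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()} if norm > 0 else new_belief

# Toy two-state example: a noisy sensor distinguishing "left" from "right".
T = {("left", "go"):  {"left": 0.8, "right": 0.2},
     ("right", "go"): {"left": 0.2, "right": 0.8}}
O = {("left", "go"):  {"ping": 0.9, "silence": 0.1},
     ("right", "go"): {"ping": 0.3, "silence": 0.7}}
print(belief_update({"left": 0.5, "right": 0.5}, "go", "ping", T, O))

On this toy model, observing "ping" after action "go" shifts the uniform prior to 0.75/0.25 in favour of "left".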

Learning Explainable and Better Performing Representations of POMDP Strategies

A Bork, D Chakraborty, K Grover, J Křetínský… - … Conference on Tools …, 2024 - Springer
Strategies for partially observable Markov decision processes (POMDPs) typically require
memory. One way to represent this memory is via automata. We present a method to learn …
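
To make the automaton view of memory concrete, here is a minimal finite-state-controller sketch in Python; the node names, observations, and actions are invented for illustration, and this is not the representation or learning method of the paper.

from dataclasses import dataclass

@dataclass
class FSC:
    initial: str
    action_of: dict   # memory node -> action to play in that node
    update: dict      # (memory node, observation) -> successor memory node

    def run(self, observations):
        """Replay an observation sequence and return the actions the controller picks."""
        node, actions = self.initial, []
        for obs in observations:
            actions.append(self.action_of[node])
            node = self.update[(node, obs)]
        return actions

# Two memory nodes: keep scanning until "target" is observed, then stay put.
fsc = FSC(initial="search",
          action_of={"search": "scan", "done": "stay"},
          update={("search", "clear"): "search", ("search", "target"): "done",
                  ("done", "clear"): "done", ("done", "target"): "done"})
print(fsc.run(["clear", "clear", "target", "clear"]))  # ['scan', 'scan', 'scan', 'stay']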

Weakest precondition inference for non-deterministic linear array programs

S Sumanth Prabhu, D D'Souza, S Chakraborty… - … Conference on Tools …, 2024 - Springer
Precondition inference is an important problem with many applications. Existing
precondition inference techniques for programs with arrays have limited ability to find and …
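
For orientation, two textbook weakest-precondition rules that come up for array programs and non-determinism; these are standard Dijkstra-style rules, not the inference technique proposed in the paper:

\[
\mathrm{wp}\bigl(a[i] := 0,\; a[j] = 0\bigr) \;=\; (j = i) \,\lor\, (j \neq i \,\land\, a[j] = 0)
\]
\[
\mathrm{wp}\bigl(S_1 \mathbin{[\,]} S_2,\; Q\bigr) \;=\; \mathrm{wp}(S_1, Q) \,\land\, \mathrm{wp}(S_2, Q)
\]

The first rule case-splits on whether the read index j aliases the written index i; the second reflects demonic non-determinism, where the precondition must guarantee Q for every resolution of the choice.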

Tools at the frontiers of quantitative verification: QComp 2023 competition report

R Andriushchenko, A Bork, CE Budde, M Češka… - International …, 2024 - Springer
The analysis of formal models that include quantitative aspects such as timing or
probabilistic choices is performed by quantitative verification tools. Broad and mature tool …

Strong Simple Policies for POMDPs

L Winterer, R Wimmer, B Becker, N Jansen - International Journal on …, 2024 - Springer
The synthesis problem for partially observable Markov decision processes (POMDPs) is to
compute a policy that provably adheres to one or more specifications. Yet, the general …
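
Such specifications are often phrased as probabilistic reachability thresholds; the symbols below are generic placeholders rather than notation from the paper:

\[
\text{find an observation-based policy } \sigma \text{ with } \Pr{}^{\sigma}_{\mathcal{M}}(\lozenge\, T) \,\geq\, \lambda,
\]

i.e., under \(\sigma\) the POMDP \(\mathcal{M}\) reaches the target set \(T\) with probability at least \(\lambda\).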

Sound Heuristic Search Value Iteration for Undiscounted POMDPs with Reachability Objectives

QH Ho, MS Feather, F Rossi, ZN Sunberg… - arXiv preprint arXiv …, 2024 - arxiv.org
Partially Observable Markov Decision Processes (POMDPs) are powerful models for
sequential decision making under transition and observation uncertainties. This paper …

Policies Grow on Trees: Model Checking Families of MDPs

R Andriushchenko, M Češka, S Junges… - arXiv preprint arXiv …, 2024 - arxiv.org
Markov decision processes (MDPs) provide a fundamental model for sequential decision
making under process uncertainty. A classical synthesis task is to compute for a given MDP …

Deductive controller synthesis for probabilistic hyperproperties

R Andriushchenko, E Bartocci, M Češka… - … Evaluation of Systems, 2023 - Springer
Probabilistic hyperproperties specify quantitative relations between the probabilities of
reaching different target sets of states from different initial sets of states. This class of …
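
A small illustrative instance of such a relation, with generic symbols rather than the paper's notation: the probability of reaching target set \(T_1\) from initial state \(s_1\) must be at least the probability of reaching \(T_2\) from \(s_2\),

\[
\Pr{}_{s_1}(\lozenge\, T_1) \;\geq\; \Pr{}_{s_2}(\lozenge\, T_2).
\]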
