Search and explore: symbiotic policy synthesis in POMDPs
This paper marries two state-of-the-art controller synthesis methods for partially observable
Markov decision processes (POMDPs), a prominent model in sequential decision making …
Learning logic specifications for policy guidance in POMDPs: an inductive logic programming approach
Abstract Partially Observable Markov Decision Processes (POMDPs) are a powerful
framework for planning under uncertainty. They allow modeling state uncertainty as a belief …
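The snippet above mentions modeling state uncertainty as a belief. As an illustrative sketch (not code from the cited paper; the model dictionaries `T`, `O` and the tiger-style example below are hypothetical), the standard Bayesian belief update in a POMDP looks like:

```python
# Bayesian belief update in a POMDP: after taking action a and receiving
# observation o, the posterior over hidden states is
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

def update_belief(belief, action, obs, T, O, states):
    """Return the posterior belief after (action, obs).

    belief: dict state -> probability
    T:      dict (s, action) -> dict s' -> probability   (transition model)
    O:      dict (s', action) -> dict o -> probability   (observation model)
    """
    new_belief = {}
    for s_next in states:
        # Predicted probability of reaching s_next under `action`.
        pred = sum(T[(s, action)].get(s_next, 0.0) * p
                   for s, p in belief.items())
        # Weight by the likelihood of the received observation.
        new_belief[s_next] = O[(s_next, action)].get(obs, 0.0) * pred
    norm = sum(new_belief.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / norm for s, p in new_belief.items()}

# Tiny two-state example: a tiger behind the left or right door.
states = ["left", "right"]
T = {(s, "listen"): {s: 1.0} for s in states}   # listening does not move the tiger
O = {("left", "listen"):  {"hear-left": 0.85, "hear-right": 0.15},
     ("right", "listen"): {"hear-left": 0.15, "hear-right": 0.85}}
b0 = {"left": 0.5, "right": 0.5}
b1 = update_belief(b0, "listen", "hear-left", T, O, states)  # belief shifts toward "left"
```

Policies over such beliefs are what the synthesis and learning methods in these entries compute or approximate.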
Learning Explainable and Better Performing Representations of POMDP Strategies
Strategies for partially observable Markov decision processes (POMDPs) typically require
memory. One way to represent this memory is via automata. We present a method to learn …
Weakest precondition inference for non-deterministic linear array programs
S Sumanth Prabhu, D D'Souza, S Chakraborty… - … Conference on Tools …, 2024 - Springer
Precondition inference is an important problem with many applications. Existing
precondition inference techniques for programs with arrays have limited ability to find and …
Tools at the frontiers of quantitative verification: QComp 2023 competition report
The analysis of formal models that include quantitative aspects such as timing or
probabilistic choices is performed by quantitative verification tools. Broad and mature tool …
Strong Simple Policies for POMDPs
The synthesis problem for partially observable Markov decision processes (POMDPs) is to
compute a policy that provably adheres to one or more specifications. Yet, the general …
Sound Heuristic Search Value Iteration for Undiscounted POMDPs with Reachability Objectives
Partially Observable Markov Decision Processes (POMDPs) are powerful models for
sequential decision making under transition and observation uncertainties. This paper …
Policies Grow on Trees: Model Checking Families of MDPs
Markov decision processes (MDPs) provide a fundamental model for sequential decision
making under process uncertainty. A classical synthesis task is to compute for a given MDP …
Deductive controller synthesis for probabilistic hyperproperties
Probabilistic hyperproperties specify quantitative relations between the probabilities of
reaching different target sets of states from different initial sets of states. This class of …
Tools at the Frontiers of Quantitative Verification
The analysis of formal models that include quantitative aspects such as timing or
probabilistic choices is performed by quantitative verification tools. Broad and mature tool …