Policy Explanation and Model Refinement in Decision-Theoretic Planning

OZ Khan - 2013 - uwspace.uwaterloo.ca
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for
sequential decision-making under uncertainty. MDPs provide a generic framework that can …
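To ground the entries below, a minimal sketch of an MDP solved by value iteration; the two-state chain, action names, and rewards are invented for illustration, not taken from any of the cited works:

```python
# Minimal MDP solved by value iteration (illustrative two-state example;
# states, actions, transition probabilities, and rewards are hypothetical).
GAMMA = 0.9

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(P, gamma=GAMMA, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(P, V, gamma=GAMMA):
    """Extract the policy that is greedy with respect to V."""
    return {
        s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in P[s][a]))
        for s in P
    }

V = value_iteration(P)
```

Here "go" is worth taking from `s0` because it reaches the absorbing high-reward state `s1`, whose value under discounting is 2/(1 - 0.9) = 20.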

Abstraction and approximate decision-theoretic planning

R Dearden, C Boutilier - Artificial Intelligence, 1997 - Elsevier
Markov decision processes (MDPs) have recently been proposed as useful conceptual
models for understanding decision-theoretic planning. However, the utility of the associated …

Efficient solution algorithms for factored MDPs

C Guestrin, D Koller, R Parr, S Venkataraman - Journal of Artificial …, 2003 - jair.org
This paper addresses the problem of planning under uncertainty in large Markov Decision
Processes (MDPs). Factored MDPs represent a complex state space using state variables …
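To make the "state variables" idea concrete, a hedged sketch of a factored representation: a state is an assignment to Boolean variables, so n variables induce 2^n flat states, while the transition model is specified per variable (a DBN-style factorization). The variable names and probabilities below are invented for illustration:

```python
import itertools

# Hypothetical Boolean state variables (illustrative, not from the paper).
VARS = ["power_on", "door_open", "robot_has_coffee"]

# The flat state space grows exponentially in the number of variables.
flat_states = list(itertools.product([False, True], repeat=len(VARS)))

def p_next_var(var, state):
    """P(var' = True | state) under a hypothetical 'wait' action.
    Each next-state variable depends only on a few parents, so the model
    needs far fewer numbers than a full 2^n x 2^n transition matrix."""
    s = dict(zip(VARS, state))
    if var == "power_on":
        return 0.99 if s["power_on"] else 0.0   # power rarely fails
    if var == "door_open":
        return 0.9 if s["door_open"] else 0.1   # door state mostly persists
    if var == "robot_has_coffee":
        return 1.0 if s["robot_has_coffee"] else 0.0  # coffee is kept

def p_transition(state, next_state):
    """Joint transition probability factorizes over the variables."""
    prob = 1.0
    for var, nxt in zip(VARS, next_state):
        p = p_next_var(var, state)
        prob *= p if nxt else (1.0 - p)
    return prob
```

Because the joint factorizes, the per-variable probabilities still define a proper distribution over all 2^n next states.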

Planning with hidden parameter polynomial MDPs

C Costen, M Rigter, B Lacerda, N Hawes - Proceedings of the AAAI …, 2023 - ojs.aaai.org
For many applications of Markov Decision Processes (MDPs), the transition function cannot
be specified exactly. Bayes-Adaptive MDPs (BAMDPs) extend MDPs to consider transition …

Decision-theoretic planning: Structural assumptions and computational leverage

C Boutilier, T Dean, S Hanks - Journal of Artificial Intelligence Research, 1999 - jair.org
Planning under uncertainty is a central problem in the study of automated sequential
decision making, and has been addressed by researchers in many different fields, including …

Stochastic dynamic programming with factored representations

C Boutilier, R Dearden, M Goldszmidt - Artificial Intelligence, 2000 - Elsevier
Markov decision processes (MDPs) have proven to be popular models for decision-theoretic
planning, but standard dynamic programming algorithms for solving MDPs rely on explicit …

Decision making under uncertainty: operations research meets AI (again)

C Boutilier - AAAI/IAAI, 2000 - researchgate.net
Abstract Models for sequential decision making under uncertainty (eg, Markov decision
processes, or MDPs) have been studied in operations research for decades. The recent …

Reasoning about MDPs Abstractly: Bayesian Policy Search with Uncertain Prior Knowledge

J Molhoek - 2024 - repository.tudelft.nl
Many real-world problems fall into the category of sequential decision-making under
uncertainty; Markov Decision Processes (MDPs) are a common method for modeling such …

An introduction to fully and partially observable Markov decision processes

P Poupart - Decision theory models for applications in artificial …, 2012 - igi-global.com
The goal of this chapter is to provide an introduction to Markov decision processes as a
framework for sequential decision making under uncertainty. The aim of this introduction is …
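The fully vs. partially observable distinction hinges on the belief state: under partial observability the agent tracks a distribution over states rather than the state itself. A hedged sketch of the standard Bayes-filter belief update, b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s), on a tiny hypothetical two-state POMDP (all numbers invented for illustration):

```python
# Belief update for a tiny hypothetical two-state POMDP (illustrative only).
STATES = ["good", "bad"]

# T[action][s][s2] = P(s2 | s, action); the machine can silently degrade.
T = {"wait": {"good": {"good": 0.9, "bad": 0.1},
              "bad":  {"good": 0.0, "bad": 1.0}}}

# O[s2][obs] = P(obs | s2); observations are noisy evidence about the state.
O = {"good": {"ok": 0.8, "alarm": 0.2},
     "bad":  {"ok": 0.3, "alarm": 0.7}}

def belief_update(b, action, obs):
    """Predict with T, weight by the observation likelihood O, renormalize."""
    new_b = {}
    for s2 in STATES:
        predicted = sum(T[action][s][s2] * b[s] for s in STATES)
        new_b[s2] = O[s2][obs] * predicted
    z = sum(new_b.values())
    return {s: v / z for s, v in new_b.items()}

b = {"good": 0.5, "bad": 0.5}
b = belief_update(b, "wait", "alarm")
```

Starting from a uniform belief, observing "alarm" after "wait" shifts the belief sharply toward the "bad" state, as expected.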

Policy iteration for factored MDPs

D Koller, R Parr - arXiv preprint arXiv:1301.3869, 2013 - arxiv.org
Many large MDPs can be represented compactly using a dynamic Bayesian network.
Although the structure of the value function does not retain the structure of the process …
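For contrast with the factored approach this entry develops, a minimal sketch of standard tabular policy iteration on a tiny hypothetical MDP, i.e. the flat algorithm whose per-state cost factored methods aim to avoid (all states, actions, and rewards below are invented):

```python
# Tabular policy iteration on a tiny hypothetical MDP (illustrative only).
GAMMA = 0.9

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def evaluate(policy, gamma=GAMMA, tol=1e-10):
    """Iterative policy evaluation: fixed point of the Bellman backup."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            v = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][policy[s]])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def policy_iteration(gamma=GAMMA):
    """Alternate evaluation and greedy improvement until the policy is stable."""
    policy = {s: next(iter(P[s])) for s in P}
    while True:
        V = evaluate(policy, gamma)
        improved = {
            s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                           for p, s2, r in P[s][a]))
            for s in P
        }
        if improved == policy:
            return policy, V
        policy = improved
```

Each sweep touches every state explicitly, which is exactly what becomes infeasible when the state space is the cross product of many variables.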