Alexander Bork: Automated Verification of Partially Observable Stochastic Models

Contact

Alexander Bork, Doctoral researcher
Phone (work): +49 241 80 21212

Stochastic models like Markov decision processes (MDPs) and stochastic games are formalisms used in a wide array of domains to model systems where uncertainty and non-determinism are present.

However, they assume perfect information about the state of the system at any time. These models thus often overestimate the amount of information actually available to a decision procedure when determining an optimal course of action for a given objective.

In reality, a system's complete state is often hidden.

Partially observable probabilistic systems extend the commonly used models with the notion that only part of the system's state is observable, so decisions must be based solely on the observable information. These systems find application in fields like artificial intelligence, robotics and economics. Their analysis is significantly more involved than in the fully observable case: intuitively, optimal decisions now depend much more heavily on the history of observed information.
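To make the setting concrete, the following minimal Python sketch encodes a POMDP as a plain data structure. All names and the example model are illustrative, not taken from any particular tool: two states emit the same observation, so a decision procedure cannot tell them apart.

```python
from dataclasses import dataclass

# Illustrative POMDP encoding (hypothetical names, not from a specific tool):
# transitions[s][a] is a distribution over successor states,
# observations[s] is the observation emitted in state s.
@dataclass
class POMDP:
    states: list[int]
    actions: list[str]
    transitions: dict[int, dict[str, dict[int, float]]]
    observations: dict[int, int]
    initial: int

# A tiny example: states 0 and 1 both emit observation 0 and are therefore
# indistinguishable, although only state 0 can ever reach the goal state 2.
tiny = POMDP(
    states=[0, 1, 2],
    actions=["a", "b"],
    transitions={
        0: {"a": {0: 0.5, 1: 0.5}, "b": {0: 0.5, 2: 0.5}},
        1: {"a": {1: 1.0}, "b": {1: 1.0}},   # state 1 is a trap
        2: {"a": {2: 1.0}, "b": {2: 1.0}},   # goal state, absorbing
    },
    observations={0: 0, 1: 0, 2: 1},
    initial=0,
)
```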

The research project focuses on the analysis and computer-aided verification of partially observable MDPs (POMDPs). As perfect state information is not available, an optimal choice of action to satisfy a property is based on an estimate of the probability of being in each state of the POMDP given the history of observations. These estimates are known as beliefs. They can be used to construct a fully observable, but typically infinite, belief MDP that captures the semantics of the POMDP.
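Concretely, the belief reached after taking an action and receiving an observation is obtained by a Bayesian update. A minimal sketch against the hypothetical POMDP encoding above (the helper name is mine, not from the project):

```python
def belief_update(pomdp, belief, action, observation):
    # Push the belief through the chosen action, keep only successors that
    # are consistent with the received observation, then renormalize.
    new_belief = {}
    for s, p in belief.items():
        for s2, q in pomdp.transitions[s][action].items():
            if pomdp.observations[s2] == observation:
                new_belief[s2] = new_belief.get(s2, 0.0) + p * q
    total = sum(new_belief.values())   # probability of this observation
    if total == 0.0:
        raise ValueError("observation impossible under this belief and action")
    return {s: p / total for s, p in new_belief.items()}

b0 = {0: 1.0}                          # initially, the state is known
b1 = belief_update(tiny, b0, "a", 0)   # -> {0: 0.5, 1: 0.5}
```

Iterating this update on the toy model already produces infinitely many distinct beliefs, which illustrates why the belief MDP is in general an infinite-state model.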

Fully observable MDPs are well-studied in theory, and many forms of analysis are tractable in practice, in particular for finite-state MDPs. However, even fundamental problems like the reachability problem (whether a given state of the system is eventually reached with at least a given probability) are generally undecidable for finite-state POMDPs. As a consequence, related work typically restricts the considered properties to a bounded number of time steps (finite horizon) or applies discounting in the computation to guarantee convergence. Both restrictions, however, can severely distort results when a rigorous analysis is required.
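The distortion introduced by discounting can be seen on a tiny numeric example (an illustration of the phenomenon, not of the project's method): consider a chain that reaches the goal with probability 0.1 in each step, and hence with probability 1 in the limit.

```python
# True infinite-horizon reachability is 1.0, but the discounted value
# converges to something much smaller: a path reaching the goal at
# step t is weighted by gamma**t.
def discounted_reach_value(p_step=0.1, gamma=0.95, eps=1e-12):
    value, stay, t = 0.0, 1.0, 0
    while stay * p_step > eps:
        value += stay * p_step * gamma ** t  # reach the goal at step t
        stay *= 1.0 - p_step                 # still not at the goal
        t += 1
    return value

print(discounted_reach_value())            # ~0.6897, far below the true 1.0
print(discounted_reach_value(gamma=1.0))   # -> ~1.0, the undiscounted limit
```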

The goal of the project is to develop novel, practically applicable verification methods for POMDPs over the infinite horizon without discounting.

To this end, finite abstractions of the belief MDP of a given POMDP are used to provide both upper and lower bounds on the optimal value. These approximation algorithms leverage existing knowledge in model checking finite MDPs to reason about POMDPs. An implementation of the approach as an extension of the probabilistic model checker Storm shows its practical applicability.
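The following deliberately naive sketch conveys the flavor of such a bound computation; it is my simplification of the general idea, not the algorithm implemented in the tool. It unrolls the belief MDP up to a fixed depth, assigns a cut-off value of 0 (for a lower bound) or 1 (for an upper bound) to the unexplored frontier beliefs, and solves the resulting finite fragment by value iteration. It builds on the hypothetical POMDP sketch above.

```python
def freeze(belief):
    # Beliefs (dicts state -> probability) as hashable dictionary keys.
    # Floating-point beliefs are used as-is; a real tool needs care here.
    return tuple(sorted(belief.items()))

def belief_successors(pomdp, belief, action):
    # One belief-MDP step: split the successor distribution by observation.
    by_obs = {}
    for s, p in belief.items():
        for s2, q in pomdp.transitions[s][action].items():
            o = pomdp.observations[s2]
            by_obs.setdefault(o, {})
            by_obs[o][s2] = by_obs[o].get(s2, 0.0) + p * q
    result = []
    for sub in by_obs.values():
        mass = sum(sub.values())
        result.append((mass, freeze({s: p / mass for s, p in sub.items()})))
    return result

def bounds_by_cutoff(pomdp, goal_obs, depth, sweeps=1000):
    # Explore the belief MDP breadth-first up to the given depth.
    init = freeze({pomdp.initial: 1.0})
    explored, layer = set(), {init}
    for _ in range(depth):
        nxt = set()
        for b in layer:
            explored.add(b)
            for a in pomdp.actions:
                for _, nb in belief_successors(pomdp, dict(b), a):
                    nxt.add(nb)
        layer = nxt - explored

    def solve(cutoff):
        # Value iteration for maximal reachability on the finite fragment;
        # unexplored frontier beliefs are frozen at the cut-off value.
        val = {}
        def get(b):
            if all(pomdp.observations[s] == goal_obs for s, _ in b):
                return 1.0                  # belief is certainly in the goal
            if b not in explored:
                return cutoff               # unexplored frontier belief
            return val.get(b, 0.0)
        for _ in range(sweeps):
            for b in explored:
                val[b] = max(
                    sum(p * get(nb)
                        for p, nb in belief_successors(pomdp, dict(b), a))
                    for a in pomdp.actions)
        return get(init)

    return solve(0.0), solve(1.0)

lo, up = bounds_by_cutoff(tiny, goal_obs=1, depth=6)
print(f"{lo:.4f} <= optimal reachability <= {up:.4f}")
```

On the toy model the two bounds (nearly) coincide; on richer models, deeper exploration and better cut-off values tighten the gap between them.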

Possible future directions include compositional approaches to POMDP analysis, which would require defining a compositional semantics for POMDPs, as well as the development of model checking approaches for partially observable stochastic games.