Graduate Seminar: Jip Spel: Monotonicity in Markov Models

Friday, July 7, 2023, 10am

Location: RWTH Aachen University, Department of Computer Science - Ahornstr. 55, building E3, room 9222

Speaker: Jip Spel



Many systems exhibit probabilistic behavior, such as randomized protocols, communication protocols, or biological systems. Probabilistic model checking is a common way to analyze this type of system. We typically model these systems as Markov models and check them against a specification given in a probabilistic extension of LTL or CTL. One of the main goals is to analyze the probability of reaching a set of target states, or the expected total reward accumulated until reaching them. When checking these properties, we assume the probabilities to be fixed constants. In practice, however, these probabilities are often not fixed but only known to lie within a given interval. Moreover, they may depend on one another. Therefore, we consider parametric Markov models, in which the probabilities are given symbolically as rational functions over parameters. Two common problems are the almost-optimal synthesis problem (i.e., finding a parameter instantiation whose expected total reward is ε-close to the true, unknown optimal expected total reward) and the feasibility problem (i.e., finding a parameter instantiation under which the target is reached with at least a given probability or expected total reward).
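As a toy illustration (my own sketch, not part of the talk), consider a hypothetical two-parameter Markov chain: from state s0 we move to s1 with probability p and fail otherwise; from s1 we reach the target with probability q and return to s0 otherwise. Solving the standard linear reachability equations symbolically yields the reachability probability as a rational function over p and q, e.g. with sympy:

```python
import sympy as sp

# Parameters of the (hypothetical) parametric Markov chain.
p, q = sp.symbols('p q', positive=True)
# x0, x1: probability of reaching the target from states s0, s1.
x0, x1 = sp.symbols('x0 x1')

# Standard reachability equations: x_s = sum over successors s' of P(s, s') * x_s'.
sol = sp.solve([sp.Eq(x0, p * x1),
                sp.Eq(x1, q + (1 - q) * x0)], [x0, x1])

# The reachability probability is a rational function: p*q / (1 - p*(1 - q)).
reach = sp.simplify(sol[x0])
```

For the instantiation p = q = 1/2 this evaluates to 1/3; a feasibility query then asks, for instance, for an instantiation with reach at least some threshold λ.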

In this talk, we introduce and define monotonicity of parametric Markov models. We observe that many systems are monotonic in one or more parameters, e.g., increasing the value of a parameter also increases the expected total reward. We show how monotonicity can be detected algorithmically and how this can be used to improve existing techniques such as parameter lifting for the almost-optimal synthesis problem. We show experimentally that this significantly reduces the time needed for almost-optimal synthesis. Furthermore, we apply the well-known gradient descent method to the feasibility problem. We show that our implementation of gradient descent (available in the model checker Storm) outperforms particle swarm optimization and quadratically constrained quadratic programming, and performs on par with sequential convex programming. Finally, we introduce parametric probabilistic timed automata (pPTAs), which extend probabilistic timed automata (PTAs) with unknown probabilities. We show that some techniques for reducing PTAs to MDPs carry over to pPTAs. An implementation of this reduction for pPTAs is available in the Modest Toolset.
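To make these two ideas concrete on a toy example (my own sketch under assumed numbers, not the talk's implementation): take a hypothetical reachability function r(p, q) = p·q / (1 − p(1 − q)) of a small parametric Markov chain. The sign of a partial derivative certifies monotonicity in that parameter, and plain gradient ascent can solve a feasibility query r ≥ λ:

```python
import sympy as sp

p, q = sp.symbols('p q', positive=True)
# Hypothetical reachability probability of a small parametric Markov chain.
reach = p * q / (1 - p * (1 - q))

# Monotonicity: d(reach)/dp simplifies to q / (1 - p*(1 - q))**2, which is
# positive on (0, 1)^2, so reach is monotonically increasing in p.
d_dp = sp.simplify(sp.diff(reach, p))

# Feasibility via gradient ascent: find (p, q) with reach >= 0.4.
f = sp.lambdify((p, q), reach)
grad = sp.lambdify((p, q), [sp.diff(reach, v) for v in (p, q)])

pt, step = [0.3, 0.3], 0.5
for _ in range(200):
    if f(*pt) >= 0.4:          # feasibility threshold reached
        break
    g = grad(*pt)
    # Gradient step, clamped to keep the probabilities inside (0, 1).
    pt = [min(max(x + step * gx, 1e-3), 1 - 1e-3) for x, gx in zip(pt, g)]
```

Monotonicity also helps here: since reach is increasing in p, an ascent step can never overshoot in the wrong direction along p, and region-based methods like parameter lifting can fix monotone parameters at their extremal values.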