Paul Brunzema: Adapting to Changing Environments in Control


Model-based control uses a model of the system dynamics to find good control actions by predicting the evolution of the system over time. Deriving models of complex systems from first principles can be difficult, so learning the dynamics directly from data, e.g. with neural networks, is desirable. This introduces new challenges: training data is often uncertain and sparse, and training such a model can be costly. Moreover, real dynamical systems can undergo changes due to factors such as wear and tear, which can lead to poor control performance.
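To make the first sentence concrete, here is a minimal toy sketch of model-based control: a (possibly learned) dynamics model predicts rollouts, and a random-shooting planner picks the action whose predicted trajectory is cheapest. The model `f_hat`, the toy dynamics, and all parameters are illustrative assumptions, not the author's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hat(x, u):
    # Stand-in for a learned dynamics model: a simple damped integrator.
    return 0.9 * x + 0.1 * u

def rollout_cost(x0, actions, target=1.0):
    """Predicted quadratic cost of applying an action sequence."""
    x, cost = x0, 0.0
    for u in actions:
        x = f_hat(x, u)
        cost += (x - target) ** 2 + 0.01 * u ** 2
    return cost

def plan(x0, horizon=10, n_samples=256):
    """Random shooting: sample action sequences, return the best first action."""
    candidates = rng.uniform(-2.0, 2.0, size=(n_samples, horizon))
    costs = [rollout_cost(x0, a) for a in candidates]
    return candidates[int(np.argmin(costs))][0]

# Closed loop: re-plan at every step (receding horizon).
x = 0.0
for _ in range(30):
    x = f_hat(x, plan(x))
print(round(x, 2))  # the state should approach the target of 1.0
```

If the learned model `f_hat` drifts away from the true dynamics, the predicted rollouts become wrong and control performance degrades, which motivates detecting and adapting to such changes.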

Such dynamical systems subject to uncertainty and change are at the core of my research. One aspect is the question of how to detect these changes in dynamics from data, with the goal of (re)learning or adapting a dynamics model only when necessary. We do this by using event triggers and by exploiting uncertainty estimates, e.g. from Bayesian neural networks. Once a change is detected, the question arises of how to efficiently adapt the model to the changed dynamics so as to maximize control performance over time. For this, architectures such as neural processes are promising. As not all control methods rely on a model for online decision making, I also aim to make classical control methods (such as PID control) adaptive to changing dynamics. Here, I combine the event-triggering ideas mentioned above with black-box optimization methods such as Bayesian optimization to optimize an unknown performance function in a changing environment.
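The event-triggering idea can be sketched as follows: compare one-step prediction errors against the model's own uncertainty estimate, and flag a change only when errors become statistically implausible for several consecutive steps. This is an illustrative toy, not the author's algorithm; the model, the injected change, and all thresholds (`kappa`, `patience`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_predict(x, u):
    """Stand-in for a probabilistic dynamics model: returns (mean, std)."""
    return 0.9 * x + 0.1 * u, 0.05

def true_step(x, u, t):
    bias = 0.0 if t < 100 else 0.3  # abrupt change in the dynamics at t = 100
    return 0.9 * x + 0.1 * u + bias + 0.02 * rng.standard_normal()

# Event trigger: fire when the residual exceeds kappa * std for
# `patience` consecutive steps (guards against single outliers).
kappa, patience, streak = 3.0, 5, 0
x, trigger_time = 1.0, None
for t in range(200):
    u = rng.uniform(-1.0, 1.0)
    mean, std = model_predict(x, u)
    x = true_step(x, u, t)
    streak = streak + 1 if abs(x - mean) > kappa * std else 0
    if streak >= patience:
        trigger_time = t  # trigger: (re)learn / adapt the model here
        break
print(trigger_time)
```

Before the change, residuals stay well inside the model's uncertainty band and no retraining is triggered; shortly after t = 100 the trigger fires, which is exactly when adaptation is worth its cost.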