Guest Talk: Alessandro Abate: Certified Reinforcement Learning with Logic Guidance
Friday, February 14, 2020, 10:30am
Location: RWTH Aachen University, Department of Computer Science - Ahornstr. 55, building E3, room 9222
Speaker: Alessandro Abate
A model-free Reinforcement Learning (RL) framework is proposed to synthesise policies for an unknown, and possibly continuous-state, Markov Decision Process (MDP), such that a given linear temporal property is satisfied.
We convert the given property into an automaton, namely a finite-state machine expressing the property. Exploiting the structure of the automaton, we shape an adaptive reward function on the fly, so that the RL algorithm can synthesise a policy whose traces probabilistically satisfy the linear temporal property.
Under the assumption that the MDP has a finite number of states, theoretical guarantees are provided on the convergence of the RL algorithm. Whenever the MDP has a continuous state space, we empirically show that our framework finds satisfying policies, if they exist. Additionally, the proposed algorithm can handle time-varying periodic environments, and can be extended to learn under safety requirements.
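To make the idea concrete, here is a minimal, self-contained sketch of the general approach, not the speaker's actual algorithm or benchmarks: Q-learning on the product of a toy chain MDP and a two-state automaton for the simple reachability property "eventually g", with the reward derived from the automaton's acceptance condition. All state spaces, parameters, and names below are illustrative assumptions.

```python
import random

# Toy chain MDP (illustrative, not from the talk): states 0..4,
# the atomic proposition "g" holds only in the last state.
N_STATES = 5
ACTIONS = [-1, +1]      # move left / move right (clipped at the ends)
GOAL = N_STATES - 1

def mdp_step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def label(s):
    return "g" if s == GOAL else ""

# Two-state automaton for the LTL property "eventually g" (F g):
# q0 -(g)-> q1, where q1 is accepting and absorbing.
def automaton_step(q, lab):
    return 1 if (q == 0 and lab == "g") else q

ACCEPTING = 1

def eps_greedy(Q, s, q, rng, eps):
    # Epsilon-greedy with random tie-breaking over product state (s, q).
    if rng.random() < eps:
        return rng.choice(ACTIONS)
    vals = {a: Q.get((s, q, a), 0.0) for a in ACTIONS}
    best = max(vals.values())
    return rng.choice([a for a, v in vals.items() if v == best])

def train(episodes=2000, horizon=50, alpha=0.5, gamma=0.95, eps=0.3, seed=0):
    """Q-learning over product states (s, q); the reward is 1 on the
    transition that enters the accepting automaton state, 0 otherwise."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s, q = 0, 0
        for _ in range(horizon):
            a = eps_greedy(Q, s, q, rng, eps)
            s2 = mdp_step(s, a)
            q2 = automaton_step(q, label(s2))
            r = 1.0 if (q2 == ACCEPTING and q != ACCEPTING) else 0.0
            done = q2 == ACCEPTING
            best_next = 0.0 if done else max(
                Q.get((s2, q2, b), 0.0) for b in ACTIONS)
            key = (s, q, a)
            Q[key] = Q.get(key, 0.0) + alpha * (
                r + gamma * best_next - Q.get(key, 0.0))
            s, q = s2, q2
            if done:
                break
    return Q

def greedy_rollout(Q, max_steps=20):
    """Follow the greedy policy; return steps until acceptance, or None."""
    s, q = 0, 0
    for t in range(max_steps):
        a = max(ACTIONS, key=lambda a: Q.get((s, q, a), 0.0))
        s = mdp_step(s, a)
        q = automaton_step(q, label(s))
        if q == ACCEPTING:
            return t + 1
    return None
```

The sketch only handles a reachability fragment via a DFA-like product; the framework presented in the talk targets full linear temporal properties, which in general require richer automata (e.g. with Büchi-style acceptance) and the on-the-fly reward shaping described above.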
The performance of the proposed architecture is evaluated on a set of numerical examples and benchmarks, where we observe an improvement of one order of magnitude in the number of iterations required for policy synthesis, compared to existing approaches (when available).