Project
Provably Safe Learning-Based Control of Uncertain Systems
Mathematical models are an integral part of the control engineer's toolbox. They provide powerful ways of predicting the behavior of systems and enable designers to exploit the full potential of the available resources. In some cases, however, models are not available due to inherent complexity. Examples include human interactions, which must be considered in the design of self-driving cars, and aerodynamic effects on drones flying in close proximity to objects. A lack of models introduces uncertainty, which we will actively mitigate through learning.
To realize this, we will develop learning architectures based on statistical principles, which will decrease the uncertainty over time, thereby increasing the performance of the system and decreasing the risk. The inherent difficulty in developing such learning controllers is guaranteeing correct and safe behavior. We will therefore employ design methodologies that produce controllers resilient to uncertainty. More specifically, we will employ the new paradigm of risk-averse control, proposed by the promoter's research group. The key contributions of the project will be (i) an arsenal of learning control methodologies, (ii) an algorithmic framework that allows for efficient implementation of these controllers, and (iii) attractive demonstrations of the practical viability of the project's results.
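The idea that accumulating data reduces model uncertainty can be made concrete with a minimal sketch. The example below is purely illustrative and is not the project's actual method: it uses Bayesian linear regression on a hypothetical two-parameter system model, with assumed values for the noise level, the prior width, and the unknown parameters, to show how the posterior covariance (a measure of uncertainty) shrinks as observations accumulate.

```python
import numpy as np

# Illustrative sketch (not the project's method): Bayesian linear regression
# on an unknown parameter vector theta of a hypothetical system model
# y = X @ theta + noise. Each new observation tightens the Gaussian
# posterior over theta, i.e. reduces model uncertainty.

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7])   # hypothetical unknown system parameters
sigma = 0.1                          # assumed measurement-noise std
tau = 1.0                            # assumed prior std on theta

def posterior(X, y):
    """Gaussian posterior mean and covariance for y = X @ theta + noise."""
    prec = np.eye(2) / tau**2 + X.T @ X / sigma**2   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma**2
    return mean, cov

# One growing data stream: later fits reuse earlier observations.
X_all = rng.standard_normal((80, 2))
y_all = X_all @ theta_true + sigma * rng.standard_normal(80)

uncertainties = []
for n in (5, 20, 80):                # more data as "time" passes
    mean, cov = posterior(X_all[:n], y_all[:n])
    uncertainties.append(np.trace(cov))

# Posterior uncertainty shrinks monotonically as data accumulates,
# because each observation adds a positive term to the precision.
print(uncertainties)
```

A safe learning controller would couple such a shrinking uncertainty estimate to its control decisions, acting cautiously while uncertainty is large and exploiting the model more aggressively as it tightens.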