
Publication

Adaptive control of a mechatronic system using constrained residual reinforcement learning

Journal Contribution - Journal Article

In this article, we propose a simple, practical, and intuitive approach to improve the performance of a conventional controller in uncertain environments using deep reinforcement learning while maintaining safe operation. Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity to deal with different operating conditions and are suboptimal as a consequence. Reinforcement learning, on the other hand, can optimize a control signal directly from input-output data and thus adapts to operating conditions, but it lacks safety guarantees, impeding its use in industrial environments. To realize adaptive control using reinforcement learning in such conditions, we follow a residual learning methodology, where a reinforcement learning algorithm learns corrective adaptations to a base controller's output to increase optimality. We investigate how constraining the residual agent's actions enables us to leverage the base controller's robustness to guarantee safe operation. We detail the algorithmic design and propose to constrain the residual actions relative to the base controller to increase the method's robustness. Building on Lyapunov stability theory, we prove stability for a broad class of mechatronic closed-loop systems. We validate our method experimentally on a slider-crank setup and investigate how the constraints affect safety during learning and optimality after convergence.
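As a rough illustration of the constrained residual idea described in the abstract, the sketch below composes a base controller command with a reinforcement-learning residual that is clipped relative to the magnitude of the base command. The function name, the clipping rule, and the parameter alpha are assumptions made for illustration; the paper's exact constraint formulation and stability conditions are not reproduced here.

```python
import numpy as np

def constrained_residual_action(u_base, u_residual, alpha=0.2):
    """Combine the base controller output with an RL residual action,
    clipping the residual to a fraction alpha of the base command.

    u_base     : command from the conventional (robust) base controller
    u_residual : corrective action proposed by the RL agent
    alpha      : relative bound on the residual (hypothetical value)
    """
    bound = alpha * np.abs(u_base)
    return u_base + np.clip(u_residual, -bound, bound)

# Hypothetical control-loop usage (controller, agent, and plant are placeholders):
# u_base = base_controller.step(reference, measurement)
# u_res  = agent.act(observation)
# u      = constrained_residual_action(u_base, u_res, alpha=0.2)
# plant.apply(u)
```

Bounding the residual relative to the base command is one way to keep the combined action close to the robust baseline while still allowing the agent to adapt to the operating conditions.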
Journal: IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
ISSN: 1557-9948
Issue: 10
Volume: 69
Pages: 10447 - 10456
Publication year: 2022
Accessibility: Closed