
Project

Assuring the safety of autonomous systems through human-centered executable assurance cases

While autonomous systems offer tremendous possibilities, they also pose major safety challenges; after all, human safety is central to every autonomous system. Existing safety assurance approaches and standards were developed primarily for systems in which a human can take over in an emergency, and they do not extend to autonomous systems. The combination of executable safety assurance cases and digital twins has great potential as a solid safety framework for autonomous systems. However, the transition from classic static safety assurance at design time to dynamic safety guaranteed by the system itself, without further human intervention, is a big step to take. This PhD project therefore focuses on the concept of human-centered executable safety cases as an important intermediate step. More specifically, we will investigate the following research question: can we enable the safety engineer to dynamically re-evaluate the safety claims, arguments, and assumptions underlying the safety assurance case, and to select the appropriate course of action, by combining the concepts of executable safety assurance cases and digital twins? This research will create fundamental knowledge on dynamic risk management, which is necessary to assure the safety of modern high-tech, software-driven autonomous systems. Additionally, the results of this research support the transition towards increased (optimal) use of smart devices and safe intelligent transport systems.

Date: 1 Oct 2021 → Today
Keywords: Safety Assurance, Autonomous Systems, Hazard and Risk Management, AI/ML Based Systems, Industry 4.0
Disciplines: Artificial intelligence not elsewhere classified, Process safety, Product safety, Robotics and automatic control
Project type: PhD project