
Publication

Machine learning-based hybrid worst-case resource analysis for embedded software and neural networks

Book - Dissertation

With the rise of the Internet of Things and of Cyber-Physical Systems with Artificial Intelligence, it is important to design computationally powerful and energy-efficient embedded systems. However, constraints that specifically apply to these embedded systems, such as real-time requirements and energy-aware computing, remain equally important for creating safe and reliable systems. These constraints are expressed as resource usage under the worst-case scenario, in other words the worst-case resource consumption. The need for more powerful and efficient systems is often addressed by introducing optimisation techniques in software and hardware, e.g. cache memory, vectorisation and multi-core processors. These techniques, however, reduce the predictability of the system's behaviour, making it difficult or even impossible to predict the worst-case resource consumption of a task. Without knowledge of this value, it can never be proven that a specific implementation will meet its requirements.

Different methodologies exist to determine the worst-case resource consumption, each with its advantages and disadvantages. Furthermore, these methods are only applicable to compiled code. When engineers have insight into the worst-case values during the development process, costly design flaws can be detected and prevented early on.

In this thesis, we tackle the aforementioned issues by introducing a new hybrid methodology that uses machine learning to acquire early predictions of the execution time and energy consumption of embedded software and machine learning models, without the need to deploy and measure them on the physical hardware. We start with an in-depth analysis of current analysis methodologies and of the software- and hardware-related influences on the worst-case execution time and energy consumption of a software task. Based on this analysis, we compose, step by step, a hybrid methodology that consists of a measurement-based layer and a static analysis layer. The first step divides the code or neural network into smaller components, or blocks. By changing the size of these hybrid blocks, we can balance the accuracy and the computational complexity of both layers. Next, we replace the measurements with a trained machine learning model that analyses each block and predicts an estimated upper bound on its worst-case resource consumption. The third step extracts a feature set of relevant software (and hardware/toolchain) attributes that characterises the software task (or neural network) and the target platform, together with their influences on the worst-case resource consumption.
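To make the three steps above concrete, the following Python sketch mocks up such a block-based prediction pipeline. Everything in it is an illustrative assumption rather than the thesis's actual implementation: the names (HybridBlock, extract_features, UpperBoundRegressor), the chosen block attributes, and the placeholder coefficients are all hypothetical.

# Hypothetical sketch of a hybrid, block-based worst-case prediction
# pipeline, loosely following the three steps described in the abstract.
# All names, attributes and coefficients are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class HybridBlock:
    """Step 1: a code fragment (or neural-network layer group) analysed as one unit."""
    name: str
    instruction_count: int   # example software attribute
    memory_accesses: int     # example software attribute
    branch_count: int        # example software attribute

def extract_features(block: HybridBlock, target_mhz: int) -> List[float]:
    """Step 3: build the feature vector from software attributes plus
    target-platform attributes that influence worst-case resource consumption."""
    return [float(block.instruction_count), float(block.memory_accesses),
            float(block.branch_count), float(target_mhz)]

class UpperBoundRegressor:
    """Step 2: stand-in for a trained ML model that returns a conservative
    (upper-bound) prediction of a block's worst-case execution time."""
    def predict_upper_bound(self, features: List[float]) -> float:
        # A real model would be trained on measurements from the target
        # hardware; this placeholder just weights the software attributes.
        return 1.2 * features[0] + 0.5 * features[1] + 2.0 * features[2]

def estimate_wcet(blocks: List[HybridBlock], model: UpperBoundRegressor,
                  target_mhz: int) -> float:
    """Measurement-free hybrid estimate: the ML model replaces the
    measurement-based layer per block; the static analysis layer then
    combines the per-block bounds (simplified here to a single-path sum)."""
    return sum(model.predict_upper_bound(extract_features(b, target_mhz))
               for b in blocks)

blocks = [HybridBlock("init", 40, 8, 2), HybridBlock("filter_loop", 320, 96, 12)]
print(f"Predicted WCET upper bound: {estimate_wcet(blocks, UpperBoundRegressor(), 80):.1f} cycles")

In the actual methodology the static analysis layer would combine the per-block bounds over the program's worst-case execution path rather than over one fixed path; the plain sum above is used only to keep the sketch short.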
Number of pages: 178
Publication year: 2022
Keywords: Doctoral thesis
Accessibility: Open