
Publication

Hardware-Aware Probabilistic Models: Learning, Inference and Use Cases

Book - Dissertation

Throughout the last decade, cloud-computing paradigms have proven successful at exploiting the opportunities and addressing the challenges of the ever-increasing number of electronic devices populating our environment. However, privacy, latency, and efficiency concerns with this approach have motivated users and developers to seek computing paradigms that remain closer to the data-collecting devices, otherwise known as edge devices. One of the main challenges to overcome in this quest towards edge computing is that the devices involved tend to be battery-operated and portable, and therefore suffer from significant energy and computational bandwidth constraints. These challenges become particularly hard to overcome when the device has to execute complex algorithms relying on large amounts of high-quality sensory data, as is often the case with machine learning applications. Furthermore, smart portable devices are exposed to dynamically changing environmental conditions and noise, and the sensory data they rely on is inherently uncertain.

Probabilistic models constitute a suitable approach to such challenging conditions: they can represent the uncertainty inherent to edge applications, and they are robust to noise and missing data. However, their implementation in resource-constrained edge devices has not been studied extensively, unlike other machine learning paradigms, such as neural networks, which have already seen tremendous progress in this research field.

This thesis proposes to endow probabilistic models with hardware-awareness, in order to enable their efficient implementation in resource-constrained edge devices. These models can represent scalable properties of the devices that host them, such as the quality with which their sensors operate, or the complexity of the inference algorithm they must perform. The strategies proposed in this thesis use these models to evaluate the impact that a specific device configuration has on resource consumption and on the performance of the machine learning task, with the overarching goal of balancing the two optimally. The proposed models can also consider the properties of a device's various sub-systems holistically, bringing about resource-saving opportunities that other resource-aware approaches fail to uncover. The accuracy versus resource consumption trade-off achieved by the proposed models and strategies is empirically evaluated for several use cases, considering different types of systems with various scalable properties. In particular, this thesis shows how to augment and exploit the properties of Bayesian networks and Probabilistic Circuits to endow them with hardware-awareness. The systems considered all benefit from this thesis's contributions, with the potential of attaining significant resource savings with minimal accuracy losses at application time.
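To make the core idea concrete, the sketch below (not taken from the thesis; all variable names and probability values are illustrative) shows one reason probabilistic models suit edge settings: when a sensor reading is unavailable, say because a noisy sensor was switched off to save energy, the corresponding variable is simply marginalized out of the posterior, and inference still yields a valid, if less confident, answer.

```python
# Illustrative two-sensor naive Bayes model (hypothetical values, not from
# the thesis): a hidden class C is inferred from two binary sensors.
# A missing sensor reading is handled by marginalizing that variable out.

# Prior over the hidden class C in {0, 1}
p_c = {0: 0.6, 1: 0.4}

# Sensor likelihoods P(S_i = 1 | C)
p_s1_given_c = {0: 0.2, 1: 0.9}   # a higher-quality sensor
p_s2_given_c = {0: 0.4, 1: 0.6}   # a noisier, cheaper sensor

def posterior(s1=None, s2=None):
    """Return P(C | evidence); a sensor set to None is marginalized out."""
    scores = {}
    for c in (0, 1):
        p = p_c[c]
        if s1 is not None:
            p *= p_s1_given_c[c] if s1 == 1 else 1 - p_s1_given_c[c]
        if s2 is not None:
            p *= p_s2_given_c[c] if s2 == 1 else 1 - p_s2_given_c[c]
        scores[c] = p
    z = sum(scores.values())
    return {c: p / z for c, p in scores.items()}

full = posterior(s1=1, s2=1)   # both sensors available
partial = posterior(s1=1)      # noisy sensor disabled to save energy
```

Comparing `full` and `partial` shows the trade-off the thesis studies: dropping the noisy sensor reduces resource consumption while the posterior over `C` degrades only gracefully.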
Publication year: 2020
Accessibility: Closed