Digital design for efficient embedded machine learning processors
The rationale of the HYPERSCALES project is to deliver an order-of-magnitude improvement in embedded real-time sensory information extraction. This will be achieved by combining (i) scalable neural network architectures, (ii) hardware architectures with matching, wide-range scalability, and (iii) resource-aware neural network training approaches that are jointly optimized for both hyperdimensional information extraction and resource efficiency.

We will pursue this through a unique sensing concept exploited by animals and humans, yet absent from almost all electronic sensing systems: hyperdimensional sensing with dynamic resource scalability. Under this scheme, the sensing stage can capture a multitude of information across many sensory channels (hyperdimensional). Yet for each sub-task during operation, the resources used by the embedded platform (energy, memory, latency) are co-optimized and scaled in proportion to the amount of information that must be extracted from the environment. High-accuracy recognition thus remains feasible at low average resource cost, enabling always-on sensory awareness.

To focus our research, our approaches will be validated and demonstrated on a scalable ring-shaped network of low-cost cameras with overlapping fields of view, achieving always-on visual awareness at < 10 mW average power consumption per camera. These sensors will enable omnidirectional image and depth awareness for devices such as drones, cars, and buildings, for safety, surveillance, and automation applications.
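The idea of scaling platform resources to the information demand of each sub-task can be illustrated with a minimal sketch: a controller that, among several network variants of different cost, selects the cheapest one whose expected accuracy meets the current sub-task's requirement. All variant names, energy figures, and accuracies below are hypothetical placeholders, not project results or the project's actual selection policy.

```python
# Hypothetical sketch of dynamic resource scaling: choose the lowest-energy
# network variant whose expected accuracy satisfies the sub-task requirement.
# Numbers and names are illustrative only.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    energy_mj: float   # energy per inference, millijoules (illustrative)
    accuracy: float    # expected accuracy on the sub-task (illustrative)

VARIANTS = [
    Variant("tiny",  energy_mj=0.2, accuracy=0.80),
    Variant("small", energy_mj=0.8, accuracy=0.90),
    Variant("full",  energy_mj=3.0, accuracy=0.97),
]

def select_variant(required_accuracy: float) -> Variant:
    """Return the lowest-energy variant meeting the accuracy target."""
    feasible = [v for v in VARIANTS if v.accuracy >= required_accuracy]
    if not feasible:
        # No variant meets the target: fall back to the most accurate one.
        return max(VARIANTS, key=lambda v: v.accuracy)
    return min(feasible, key=lambda v: v.energy_mj)

# An easy sub-task (e.g. motion wake-up) takes the cheap path;
# a hard sub-task (e.g. fine-grained recognition) escalates.
print(select_variant(0.75).name)  # → tiny
print(select_variant(0.95).name)  # → full
```

In a deployed always-on system, the accuracy requirement would itself be estimated at runtime (e.g. from scene difficulty or an early-exit confidence score), so average energy tracks the average, not worst-case, information demand.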