
Project

Robust Fleet-Wide Reinforcement Learning (FWOSB27)

The world is a connected place, in which the cloud plays an increasingly vital role. Modern wireless sensors allow physical devices to move away from local controllers towards smarter cloud-based architectures. When such devices are similar, aggregating information from many of them gives a wider view of the problem and enables more effective, flexible and robust control. The challenge is how to organize this control so that it benefits from the similarities.

We propose to automate such fleet-wide control using Reinforcement Learning. Our method must process information fast (objective 1), be able to share information among devices (objective 2) and continuously incorporate new information (objective 3).

Our methods will be valorized in an offshore wind farm setting. The current trend is to place turbines together in a farm to minimize transmission costs and maximize energy output from the available space. Today, each wind turbine makes decisions based only on its own sensed information, rather than on the bigger picture: the overall weather conditions, the energy demand at that time and the current health of the turbines in the farm. Fleet control will improve the predictability of energy output for the electricity grid and reduce the risk of failure by lowering the loads on turbines that are already damaged.
Date: 1 Jan 2017 → 31 Dec 2020
Keywords: Reinforcement Learning, Informatics
Disciplines: Systems theory, modelling and identification
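
The project description does not specify a particular algorithm. Purely as an illustration of objective 2 (sharing information among similar devices), the sketch below pools experience from several simulated devices into a single tabular Q-learning agent. All names here (FleetQLearner, toy_env_step, the toy dynamics and parameters) are hypothetical placeholders, not the project's actual method.

```python
# Illustrative sketch only: one shared Q-learner updated with experience
# from every device in a small simulated "fleet" of similar devices.
import random
from collections import defaultdict

class FleetQLearner:
    """Shared tabular Q-function; all fleet devices write their experience into it."""
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the shared Q-table.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def toy_env_step(state, action):
    """Hypothetical device dynamics: reward for matching the action to the state's parity."""
    reward = 1.0 if action == state % 2 else 0.0
    return reward, (state + 1) % 4

learner = FleetQLearner(actions=[0, 1])
fleet_states = [0, 1, 2, 3]                  # four similar devices in different states
for episode in range(200):
    for i, s in enumerate(fleet_states):     # every device feeds the same shared learner
        a = learner.act(s)
        r, s_next = toy_env_step(s, a)
        learner.update(s, a, r, s_next)
        fleet_states[i] = s_next
```

Because the devices are assumed similar, experience gathered by one device immediately benefits the others through the shared value function; a real fleet-wide controller would additionally need to address objectives 1 and 3 (fast processing and continuous incorporation of new information), which this toy example does not.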