
Publication

Self-learning Control for Residential Demand Response

Book - Dissertation

In recent years, there has been a significant increase in the share of variable renewable energy sources in the distribution grid. The intermittent, stochastic and uncontrollable nature of these energy sources introduces a need for flexibility from end users' devices and energy storage assets. This flexibility is used to balance electricity demand and supply and to avoid grid congestion issues. At the residential level, end users can offer flexibility through controllable loads or devices such as electric water boilers and heat pumps for space heating. Additionally, flexibility is increasingly provided by energy storage due to the decreasing prices of batteries. These batteries and controllable loads can therefore adapt their energy demand or supply profiles based on an external signal such as the electricity price, and as such can participate in demand response programs.
In the existing control paradigm, harnessing flexibility from these devices for demand response requires sufficiently accurate system models. Developing such models is challenging and not cost-effective: the behaviour of residential end users and the renewable generation profiles are uncertain, behavioural patterns differ between end users, and every building has a unique design, so the modelling process must be repeated for every new building or controllable device. While this modelling effort could be economically feasible for large commercial buildings, it is not cost-effective for large-scale implementation in residential buildings. In the absence of sufficiently accurate models, naive (rule-based) control techniques, which require a set of predefined rules, have been widely employed. In general, neither predefined rules nor system models adapt to operational or hardware changes, which leads to sub-optimal system operation.
A recent attempt to address the challenges of the model-based and rule-based control paradigms in residential demand response has been to use a model-free (and data-driven) control technique called reinforcement learning. This technique learns a near-optimal control policy directly from system observations, without the need for system models or system identification. This dissertation builds on the existing reinforcement learning literature and contributes to its application in residential demand response from both single-agent and multi-agent control perspectives. The presented work focuses on utilising residential flexibility to maximise self-consumption of locally generated electricity in buildings and microgrids, which benefits end users by reducing their electricity costs and the distribution system operator by avoiding grid congestion problems. The flexibility is harnessed from typical controllable devices present at the residential level: heat pumps for space heating, electric vehicles and batteries. The dissertation also investigates the integration of reinforcement learning with existing distributed optimisation techniques, such as dual decomposition, to assist in avoiding grid congestion problems. Additionally, it investigates the concept of global rewards in multi-agent reinforcement learning, which allows energy sharing and optimisation towards a global system objective in microgrids without communication between the learning agents.
At its heart, the presented work shows through simulation results - using real-world data and realistic case studies - that reinforcement learning techniques are a suitable alternative to model-based and rule-based control, with the potential to assist in the efficient integration of renewable energy into the power grid and in the development of an intelligent power grid through cost-effective operational control in buildings and microgrids.
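
To make the model-free idea concrete, the following minimal sketch shows a tabular Q-learning agent for a single home battery that learns, purely from observed rewards, when to charge or discharge in order to maximise self-consumption of locally generated electricity. This is an illustrative sketch only, not the controller developed in the dissertation: the state discretisation, the toy PV-minus-load profile, the battery parameters, the flat import price and all learning hyperparameters are assumptions chosen for brevity.

import numpy as np

# Illustrative (hypothetical) example: tabular Q-learning for one home battery.
# State: (hour of day, discretised PV surplus, battery state of charge).
# Actions: 0 = idle, 1 = charge, 2 = discharge.
# Reward: negative cost of energy imported from the grid, so maximising reward
# encourages self-consumption of locally generated electricity.

RNG = np.random.default_rng(0)

N_HOURS, N_SURPLUS, N_SOC, N_ACTIONS = 24, 3, 5, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed learning hyperparameters
CAPACITY = 4.0                            # assumed usable battery capacity (kWh)
STEP_KWH = 1.0                            # assumed energy moved per action (kWh)
PRICE = 0.25                              # assumed flat import price (EUR/kWh)

q_table = np.zeros((N_HOURS, N_SURPLUS, N_SOC, N_ACTIONS))

def pv_surplus(hour):
    """Toy PV-minus-load profile in kWh: surplus around midday, deficit otherwise."""
    return max(0.0, 2.0 * np.sin((hour - 6) / 12 * np.pi)) - 0.8

def discretise(surplus):
    """Map the continuous surplus to one of three discrete levels."""
    return 0 if surplus < 0 else (1 if surplus < 1.0 else 2)

def step(hour, soc_kwh, action):
    """Apply one action and return (reward, next state of charge)."""
    surplus = pv_surplus(hour)
    if action == 1:                       # charge from PV surplus (or the grid)
        charged = min(STEP_KWH, CAPACITY - soc_kwh)
        soc_kwh += charged
        surplus -= charged
    elif action == 2:                     # discharge to cover local load
        discharged = min(STEP_KWH, soc_kwh)
        soc_kwh -= discharged
        surplus += discharged
    grid_import = max(0.0, -surplus)      # energy still bought from the grid
    reward = -PRICE * grid_import         # cost-based reward signal
    return reward, soc_kwh

for episode in range(2000):               # each episode is one simulated day
    soc_kwh = CAPACITY / 2
    for hour in range(N_HOURS):
        s = (hour, discretise(pv_surplus(hour)),
             int(soc_kwh / CAPACITY * (N_SOC - 1)))
        # epsilon-greedy action selection
        if RNG.random() < EPSILON:
            a = int(RNG.integers(N_ACTIONS))
        else:
            a = int(np.argmax(q_table[s]))
        reward, soc_kwh = step(hour, soc_kwh, a)
        nxt_hour = (hour + 1) % N_HOURS
        s_next = (nxt_hour, discretise(pv_surplus(nxt_hour)),
                  int(soc_kwh / CAPACITY * (N_SOC - 1)))
        # standard Q-learning update: only the observed transition and reward
        # are used, so no model of the household or battery is required
        q_table[s + (a,)] += ALPHA * (
            reward + GAMMA * np.max(q_table[s_next]) - q_table[s + (a,)]
        )

Because the update relies only on observed transitions and rewards, no building or battery model has to be identified beforehand; this is the property that motivates the use of reinforcement learning for residential demand response in the dissertation.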
Year of publication: 2020
Accessibility: Open