
Publication

Reinforcement learning for control of flexibility providers in a residential microgrid

Journal contribution - Journal article

The smart grid paradigm and the deployment of smart meters have led to the availability of large volumes of data. This data is expected to assist in power system planning and operation, and in the transition from passive to active electricity users. With recent advances in machine learning, this data can be used to learn system dynamics. This paper explores two model-free reinforcement learning (RL) techniques, policy iteration (PI) and fitted Q-iteration (FQI), for scheduling the operation of flexibility providers (a battery and a heat pump) in a residential microgrid. The proposed algorithms are data-driven and can easily be generalized to control any flexibility provider, without requiring expert knowledge to build a detailed model of the flexibility provider and/or microgrid. The algorithms are tested in a multi-agent collaborative setting and in a single-agent stochastic microgrid setting, where uncertainty arises from unknown future electricity consumption patterns and photovoltaic production. Simulation results show that PI outperforms FQI, with a 7.2% increase in photovoltaic self-consumption in the multi-agent setting and a 3.7% increase in the single-agent setting. Both RL algorithms perform better than a rule-based controller and compete with a model-based optimal controller, and are thus a valuable alternative to model- and rule-based controllers.
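To illustrate the kind of batch, model-free approach the abstract refers to, the sketch below implements fitted Q-iteration on a toy discretized battery problem. The state/action spaces, dynamics, and reward here are illustrative assumptions, not the paper's actual formulation; in the tabular case the regression step degenerates to averaging bootstrapped targets.

```python
# Minimal fitted Q-iteration (FQI) sketch on a toy battery-scheduling MDP.
# Everything here (states, actions, reward) is an illustrative assumption,
# not the formulation used in the paper.
import random

random.seed(0)

# Toy MDP: battery state of charge in {0, 1, 2}; actions: -1 discharge, 0 idle, +1 charge.
STATES = [0, 1, 2]
ACTIONS = [-1, 0, 1]
GAMMA = 0.95

def step(soc, a):
    """Illustrative dynamics: clip state of charge; reward discharging when charged."""
    nxt = max(0, min(2, soc + a))
    reward = 1.0 if (a == -1 and soc > 0) else 0.0
    return nxt, reward

# 1) Collect a batch of transitions with a random policy (model-free, offline).
batch = []
for _ in range(500):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    batch.append((s, a, r, s2))

# 2) FQI: repeatedly fit Q to bootstrapped targets r + gamma * max_a' Q(s', a').
#    With discrete states/actions the "regressor" is just a per-pair average.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(50):
    targets = {}
    for s, a, r, s2 in batch:
        y = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        targets.setdefault((s, a), []).append(y)
    new_Q = dict(Q)  # keep old values for any pair absent from the batch
    for sa, ys in targets.items():
        new_Q[sa] = sum(ys) / len(ys)
    Q = new_Q

# Greedy policy from the fitted Q-function.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

On this toy problem the learned greedy policy charges when empty and discharges otherwise; in the paper's setting the same batch-fitting loop would instead use a function approximator over continuous measurements.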
Journal: IET Smart Grid
ISSN: 2515-2947
Issue: 1
Volume: 3
Pages: 1 - 11
Year of publication: 2019
Accessibility: Open