
Publication

Opponent Modelling for Reinforcement Learning in Multi-Objective Normal Form Games

Book contribution - Book chapter / Conference contribution

Subtitle: Extended Abstract

In this paper, we investigate the effects of opponent modelling on multi-objective multi-agent interactions with non-linear utilities. Specifically, we consider multi-objective normal form games (MONFGs) with non-linear utility functions under the scalarised expected returns optimisation criterion. We contribute a novel actor-critic formulation to allow reinforcement learning of mixed strategies in this setting, along with an extension that incorporates opponent policy reconstruction using conditional action frequencies. Our empirical results demonstrate that opponent modelling can drastically alter the learning dynamics in this setting.
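The opponent-model component described above can be illustrated with a small sketch. The payoff matrix, the non-linear utility function, and all names below are illustrative assumptions, not taken from the paper; the sketch shows only frequency-based opponent policy reconstruction and a best response under the scalarised expected returns (SER) criterion, not the paper's actor-critic learner.

```python
import numpy as np

# Hypothetical 2-action, 2-objective normal-form game (illustrative payoffs).
# payoffs[a_self, a_opp] -> vector return per objective for the learner.
payoffs = np.array([
    [[4.0, 0.0], [1.0, 1.0]],
    [[1.0, 1.0], [0.0, 4.0]],
])

def utility(v):
    # Illustrative non-linear utility applied to the *expected* vector
    # return, in line with the SER optimisation criterion.
    return v[0] * v[1]

# Opponent model: empirical action frequencies with a uniform prior
# (a simple, unconditioned instance of frequency-based reconstruction).
opp_counts = np.ones(2)

def opponent_policy():
    return opp_counts / opp_counts.sum()

def best_response():
    # Expected vector return of each own action under the modelled
    # opponent, then pick the action maximising utility of that mean.
    pi_opp = opponent_policy()
    expected = np.einsum('j,ijk->ik', pi_opp, payoffs)
    return int(np.argmax([utility(v) for v in expected]))

# Toy interaction loop against a fixed mixed-strategy opponent.
rng = np.random.default_rng(0)
true_opp = np.array([0.7, 0.3])
for _ in range(500):
    a_opp = rng.choice(2, p=true_opp)
    opp_counts[a_opp] += 1

print(np.round(opponent_policy(), 2))  # estimate close to [0.7, 0.3]
print(best_response())
```

Note that with this utility, maximising the utility of the expected return is not the same as maximising expected scalar payoff, which is exactly why mixed strategies and opponent information matter in the SER setting.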

Book: Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems, AAMAS 2020
Series: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Pages: 2080-2082
Number of pages: 3
Year of publication: 2020
Accessibility: Open