Project

Coordinating human and agent behaviour in collective-risk scenarios (FWOSB26)

Many situations in which humans interact, directly or through technologies in hybrid socio-technical systems, resemble social dilemmas, of which the collective-risk game is one example. In that game, humans and/or agents must repeatedly decide how much they are prepared to invest in a common goal; if the joint investment falls short, a disaster can occur that wipes out the accumulated benefits. Climate change mitigation and peer-to-peer energy markets are examples of this game. Behavioural experiments have shown that humans tend to invest only when the risk of disaster is very high, and game-theoretical models explain these results through the changes that risk introduces in the type of game being played. Yet how do humans actually make their decisions? Can we capture this in agent-based models, and can that knowledge be used to guide participants to invest before the risk becomes too high, or to abstain from detrimental behaviour that leads to the failure of the system? This interdisciplinary project aims to use artificial intelligence techniques capable of modelling decision-making to answer these questions. More specifically, the aim is to collect experimental data that allows us to infer and design such models, and finally to use those models to build persuasive hybrid systems whose goal is to steer participants' behaviour towards a more beneficial outcome.
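The dynamics described above can be sketched as a minimal simulation. All parameters (group size, endowment, target, risk level) and the simple probabilistic contribution policy are illustrative assumptions, not the project's actual experimental setup:

```python
import random

def collective_risk_game(n_players=6, rounds=10, endowment=40,
                         contribution=2, risk=0.9, seed=0):
    """One run of a collective-risk game with a toy contribution policy.

    Each round, every player either contributes a fixed amount to the
    common pool or keeps it. If the pool misses the target after the
    final round, disaster strikes with probability `risk` and all
    remaining endowments are lost.
    """
    rng = random.Random(seed)
    # Illustrative target: half of what full cooperation would raise.
    target = n_players * contribution * rounds / 2
    balances = [endowment] * n_players
    pool = 0
    for _ in range(rounds):
        for i in range(n_players):
            # Toy policy: contribute with probability equal to the
            # perceived risk level (higher risk -> more investment).
            if balances[i] >= contribution and rng.random() < risk:
                balances[i] -= contribution
                pool += contribution
    if pool < target and rng.random() < risk:
        return [0] * n_players  # disaster: accumulated benefits are lost
    return balances
```

A run returns each player's remaining endowment, either their savings after contributions or zero if the disaster occurred; varying `risk` reproduces the qualitative experimental finding that low perceived risk leads to under-investment and frequent collective failure.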
Date: 1 Jan 2017 → 31 Dec 2020
Keywords: collective-risk
Disciplines: Other computer engineering, information technology and mathematical engineering not elsewhere classified