
Project

Reinforcement Learning as a Framework for Ethical AI

Artificial Intelligence (AI) makes decisions with ethical impact, so it is important that AI is aligned with human values. Reinforcement Learning (RL) is a promising method for developing AI and has been successfully applied in academia and industry, including robotics, self-driving cars, and strategic games such as the board game Go. The goal of this project is to investigate whether RL is a suitable framework for implementing ethical decision-making in AI. There are three research objectives: first, investigating whether RL is conceptually limited to particular kinds of ethical theories, specifically whether RL can be used for deontological ethics; second, examining whether approaches to learning the objective function are methodologically flawed; and third, exploring how (if at all) RL agents can be aligned with the values of multiple humans.
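To illustrate the first objective, the following toy sketch (entirely hypothetical, not part of the project's methodology) contrasts a purely reward-maximising agent with one that treats a deontological rule as a hard constraint on the action set rather than as a penalty term in the reward. The action names, reward values, and the "never deceive" rule are invented for illustration only.

```python
# Toy contrast between reward maximisation and a deontological constraint.
# Each action has a task reward; one action violates an assumed moral rule.

ACTIONS = {
    "cooperate": {"reward": 5, "violates_rule": False},
    "deceive":   {"reward": 9, "violates_rule": True},   # highest reward, but forbidden
    "defect":    {"reward": 3, "violates_rule": False},
}

def utilitarian_choice(actions):
    """Pure reward maximisation: pick the highest-reward action."""
    return max(actions, key=lambda a: actions[a]["reward"])

def deontological_choice(actions):
    """Filter out rule-violating actions first, then maximise reward."""
    permitted = {a: v for a, v in actions.items() if not v["violates_rule"]}
    return max(permitted, key=lambda a: permitted[a]["reward"])

print(utilitarian_choice(ACTIONS))    # → deceive
print(deontological_choice(ACTIONS))  # → cooperate
```

The sketch makes the conceptual question concrete: encoding a deontological rule as a reward penalty can still be outweighed by a large enough task reward, whereas a hard constraint cannot, which is one reason it is not obvious that standard reward-maximising RL can express deontological ethics.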

Date: 19 Nov 2021 → Today
Keywords: Rational Choice Theory, AI Ethics, Reinforcement Learning, Artificial Intelligence
Disciplines: Ethics of technology, Human-centred and life-like robotics, Economic methodology, Game theory, economics, social and behavioural sciences
Project type: PhD project