
Project

Discovering biased representations of people

Standard approaches to detecting and mitigating bias assume a fixed representation of people, and of the decisions concerning them, in the data, together with a fixed utility notion (e.g., accuracy) and a formal measure of bias/fairness. Project KULEUVEN-1 (Discovering biased representations of people) will focus on representational adequacy, i.e., the extent to which the data represent what is (legally and/or ethically) objectionable about bias. The project will develop a modelling language to integrate background knowledge, requirements engineering methods to elicit adequate representations, and composition methods to combine different representations.
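To make the "formal measure of bias/fairness" concrete, a minimal sketch of one standard measure, demographic parity difference, computed over a fixed representation of people (the function, group labels, and data below are hypothetical illustrations, not part of the project):

```python
# Illustrative sketch: demographic parity difference, one standard formal
# fairness measure over a fixed data representation. All names and data
# here are hypothetical examples.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates across groups.

    decisions: list of 0/1 outcomes; groups: parallel list of group labels.
    """
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in members) / len(members)
    return max(rates.values()) - min(rates.values())

# Example: group "A" receives positive decisions at rate 0.75,
# group "B" at rate 0.25, so the measured disparity is 0.5.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

Note that such a measure takes the group labels and decision variables as given; the project's point is precisely that whether this representation adequately captures what is objectionable about bias is itself in question.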

Date: 1 Oct 2020 → Today
Keywords: bias, knowledge representation, machine learning
Disciplines: Machine learning and decision making
Project type: PhD project