Project

Improving the Interpretability, Bias, and Fairness of Process-Driven Decision Models

The current avalanche of data-driven solutions is overwhelming. While data fuels advances in machine learning and artificial intelligence, these fields have mostly focused on computational aspects, often neglecting the interpretation, actionability, and implications of their results. Furthermore, both the data and the algorithms used to derive models from it are often biased, for instance because of the data's provenance. These biases can lead to discrimination, a problem addressed under the term fairness in machine learning. This project aims to introduce these concepts into process analysis, more specifically the areas of process analysis that rely on predictive analytics. Two strands of research will be targeted: predictive process monitoring, which predicts the outcomes of processes, their future execution paths, and lead times; and decision mining, which untangles the various decisions present in a process to model how they depend on the process's steps. Neither area has yet made systematic use of explainability, fairness, and bias mitigation techniques, even though these are highly relevant. The results will include more transparent process analysis techniques that capture fairness metrics and introduce bias mitigation, preventing underrepresented groups from being treated differently, e.g., in a loan application process.
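As a concrete illustration of the kind of fairness metric the project targets, the following minimal Python sketch computes the demographic parity difference between two groups of cases for a hypothetical predictive process monitoring classifier (e.g., predicting loan approval as the case outcome). All names and data below are illustrative assumptions, not part of the project's methods or codebase.

    from typing import Sequence

    def demographic_parity_difference(predictions: Sequence[int],
                                      groups: Sequence[str],
                                      group_a: str,
                                      group_b: str) -> float:
        """Difference in favourable-outcome rates between two groups.

        predictions: 1 = favourable predicted outcome (e.g., loan approved).
        groups: the protected-attribute value of each case.
        A value near 0 suggests both groups receive favourable outcomes
        at similar rates; large values flag potential disparate impact.
        """
        def favourable_rate(g: str) -> float:
            outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return favourable_rate(group_a) - favourable_rate(group_b)

    # Hypothetical predicted outcomes for eight completed cases.
    preds = [1, 1, 1, 0, 0, 1, 0, 0]
    grps  = ["A", "A", "A", "B", "B", "B", "B", "A"]
    print(demographic_parity_difference(preds, grps, "A", "B"))  # 0.5

In this toy example, group A receives the favourable outcome at a rate of 0.75 versus 0.25 for group B, the sort of disparity that bias mitigation in the project's process analysis techniques would aim to detect and reduce.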

Date: 1 Oct 2020 → Today
Keywords: bias and fairness, process modeling, explainability
Disciplines: Machine learning and decision making, Data mining, Data collection and data estimation methodology, computer programs