
Project

End-user interpretability for complex machine learning models

Current AI-driven decision support systems, while highly effective, provide little explanation for their decisions. The aim of this thesis is to develop and investigate novel methods that allow the end user (e.g. the surgeon in tumour classification or the recruiter in applicant selection) to interpret and understand the output of a machine learning model, increasing trust and accountability in medical and industrial settings.
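The project description does not name a specific technique, but a common starting point for model-agnostic end-user explanations is permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below is purely illustrative; the toy data, the fixed `model` function, and the `permutation_importance` helper are hypothetical, not part of the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 determines the label, feature 1 is pure noise
# (a stand-in for one informative and one irrelevant measurement).
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """A fixed 'black box' that happens to use only feature 0."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10, rng=rng):
    """Mean drop in accuracy when each feature column is shuffled."""
    baseline = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
            drops.append(baseline - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(model, X, y)
```

Shuffling feature 0 should cost the model roughly half its accuracy, while shuffling the unused feature 1 costs nothing; an end user can read such scores as "which inputs the model actually relied on".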

Date: 11 Oct 2019 → 11 Oct 2023
Keywords: machine learning
Disciplines: Machine learning and decision making
Project type: PhD project