
Project

Design and Interpret: A New Framework for Explainable AI

Deep neural networks (DNNs) have delivered tremendous performance improvements across a wide range of applications. However, they have one important shortcoming: they are often considered black boxes, as their inner processes and generalization capabilities are not fully understood. In this project, we aim to tackle this problem by developing a new framework for explainable and interpretable AI. Two complementary research directions will be investigated. First, we argue that knowledge about the data structure should be incorporated in the design of DNNs, i.e. prior to network training, leading to network transparency: networks that are more interpretable by design. We will apply this mostly to inverse problems such as image denoising, super-resolution and inpainting. Second, we will develop trustworthy methods for post-hoc interpretation and explanation, which analyze the behavior of a network after it has been trained. This will be demonstrated on image classification as well as object detection problems. We expect both strategies to reinforce one another, leading not only to more explainable models but also to better-performing ones that outperform the current state of the art.
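As an illustration of the second direction, the following is a minimal sketch of gradient-based saliency, one common post-hoc explanation technique. The toy CNN, input sizes, and variable names are illustrative assumptions for this sketch, not models or code from the project.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a trained network under analysis
# (assumption: any differentiable image classifier works here).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

# Dummy 32x32 RGB input; requires_grad=True so gradients flow back to the pixels.
x = torch.randn(1, 3, 32, 32, requires_grad=True)
logits = model(x)
target = logits.argmax(dim=1).item()  # explain the predicted class

# Gradient of the predicted-class score w.r.t. the input: large magnitudes
# mark pixels whose perturbation most affects the prediction.
logits[0, target].backward()
saliency = x.grad.abs().amax(dim=1)  # max over color channels -> (1, 32, 32) heatmap
print(saliency.shape)  # torch.Size([1, 32, 32])
```

Substituting the trained network under study and a real image yields a per-pixel relevance heatmap, a simple baseline against which more trustworthy attribution methods can be compared.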

Date: 1 Jan 2020 → 31 Dec 2023
Keywords: Deep neural networks (DNNs), AI, image classification, object detection
Disciplines: Computer vision, Pattern recognition and neural networks, Image and language processing