
Project

Privacy-preserving AI

The purpose of our work is to attack the problem of securely training and exploiting AI models across several parties that are not allowed to share their data, and to study the possibility for several entities to combine their private data sets to boost learning processes. Parties often will not share their data for economic reasons or because of privacy legislation, creating a need for privacy-preserving AI. Our aim is to enable privacy-preserving knowledge transfer among those parties in order to gain a better understanding of the various data sets. The importance of such techniques is immediately apparent in many AI applications in security- or privacy-sensitive industries. Prominent examples are the healthcare and pharmaceutical sectors. Big data analytics can revolutionize healthcare by leveraging existing large and varied clinical and claims data sets split over many different owners (healthcare providers, hospitals, medical insurance companies) who will not share their data with outside entities. Similarly, several pharmaceutical laboratories could leverage their joint expertise in drug discovery but cannot share their data sets for intellectual-property reasons.
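One common way for parties to combine their data sets without sharing them is federated learning, listed among the project keywords. The toy sketch below is purely illustrative (not taken from this project): each party computes a model update on its own private data, and only model parameters are exchanged and averaged; in a real deployment those exchanges would be further protected, e.g. with secure multiparty computation, homomorphic encryption, or differential-privacy noise.

```python
def local_update(w, data, lr=0.1):
    """One gradient step for a toy 1-D linear model y = w * x,
    computed entirely on a single party's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(weights):
    """Server-side step: only the model parameters leave each party;
    the raw training examples never do."""
    return sum(weights) / len(weights)

# Two parties hold private samples of the same relation y = 2 * x.
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0), (4.0, 8.0)]

w = 0.0  # shared global parameter
for _ in range(50):
    w = federated_average([local_update(w, party_a),
                           local_update(w, party_b)])

print(round(w, 2))  # converges toward 2.0 without pooling the raw data
```

The joint model recovers the underlying relation even though neither party ever sees the other's examples, which is the knowledge-transfer effect the project aims to achieve with stronger cryptographic guarantees.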

Date: 22 Nov 2019 → 25 Sep 2022
Keywords: Artificial intelligence, Deep learning, Matrix factorization, Privacy, Secure multiparty computation, Bayesian networks, Federated learning, Multitask learning, Homomorphic encryption, Differential privacy
Disciplines: Artificial intelligence not elsewhere classified, Data mining, Statistics, Statistics and numerical methods not elsewhere classified, Applied mathematics in specific fields not elsewhere classified, Computer science, General mathematics, Cryptography, privacy and security
Project type: PhD project