Project

Counting shades of data quality: uncovering the notions of AI data quality, personal data quality and personal data accuracy

Deficits in the datasets used to develop an AI model can be a major source of inaccurate or biased predictions, which in turn can seriously harm individuals. Consequently, the need to safeguard the quality of data used for developing AI systems has been gaining the attention of legal scholars and EU policymakers. The legal concept of AI data quality is, however, still fraught with significant uncertainty. Interestingly, scholars and policymakers have relied on privacy and data protection legislation to analyze this notion. There are indeed apparent similarities between the concept of AI data quality and the principle of personal data accuracy in data protection law: i) both are intended to safeguard against erroneous and unfair decisions affecting an individual; and ii) ‘accuracy’ is frequently put forward as one of the requirements for AI data quality and is, at the same time, one of the core principles governing the processing of personal data. The principle of accuracy in data protection law is, however, fraught with uncertainty too. It is, moreover, unclear whether, in data protection law, data ‘accuracy’ is a mere synonym of data ‘quality’ or a sub-component of it. All these (terminological) uncertainties require clarification. This PhD project intends to: i) elucidate the scope and meaning of ‘personal data quality’ and ‘accuracy’ under EU data protection law in relation to data used for the development of AI systems; and ii) investigate how these notions could and should shape the evolution of a possible legal concept of ‘AI data quality’.
Date: 1 Sep 2022 → Today
Keywords: AI, data quality, personal data protection
Disciplines: Information law
Project type: PhD project