
Publication

Privacy, due process and the computational turn. A parable and a first analysis

Book Contribution - Chapter

The ever-growing avalanche of databases during the last decade has been accompanied by a surge of advanced computational techniques. What makes many of these techniques tick is a variety of so-called 'machine learning algorithms'. Learning algorithms enable machines to generate models that make sense of large datasets in order to support human perception and decision making. The algorithms are partly epistemological extensions (like a pencil or a pair of glasses) and partly epistemological companions (like a sniffer or guide dog) whose models are capable of surprising and altering the perception of the user. Because learning algorithms are built by humans, their epistemology is based on familiar ways of reasoning, generalizing and making sense. On the other hand, the logic leading up to a particular model is often shrouded in opacity: not only because algorithms are treated as business secrets, but even more frequently because the execution of the algorithms leads to complex sequences of calculative operations which are difficult to grasp or represent. This double role of so-called 'smart' machines, which are both extensions and companions, is characteristic of the computational turn. When learning algorithms are applied to make sense of data relating to humans, this can affect the power balances between those subjected to the techniques and those using them, particularly the balances protected by the rights to privacy and due process. The papers collected in this volume do not merely address the epistemological questions raised by the computational turn but relate them to choices faced by policy makers, engineers and citizens, and to the way these choices affect the constitutions of our societies. In this introductory chapter De Vries revisits the various contributions to this volume and looks at how they address issues related to the ecology (how to coexist with the perceptions of machines?) and pragmatism (can the formula 'whatever works best', when preferring one algorithm over another, be complicated by asking 'for whom?' and 'on which task?') of the computational turn. The chapter opens with a parable about three robotic dogs, presenting these issues of ecology and pragmatism in a playful manner. At the end of the chapter the parable is revisited. Armed with recommendations and ideas derived from the chapters collected in this volume, it offers four practical lessons for the era of the computational turn: (1) when a machine has to make sense of the world based on general instructions about how to recognize similarity and patterns, rather than explicit, top-down definitions describing a particular class, the perception of the machine gains a degree of autonomy; (2) machines can be instructed in many ways to 'learn' or generalize, and there is no silver bullet that works in all situations; (3) it is difficult to tell which machine categorization is correct and which is incorrect; (4) whether a solution is good or not is not an objective but a political question.
Book: Privacy, Due Process and the Computational Turn
Pages: 9-38
Number of pages: 30
ISBN: 978-0-415-64481-5
Publication year: 2013
Keywords: machine learning algorithms, parable, pragmatism 2.0, ecology, epistemological extension, epistemological companion, polyphonic truth, unsupervised learning, supervised learning, Leibniz (Gottfried Wilhelm Leibniz)
  • VABB Id: c:vabb:377448
  • Scopus Id: 84901052465