Predictive Privacy: Collective Privacy Challenges by AI and Big Data
Big data and artificial intelligence (AI) pose a new challenge for data protection when these techniques are used to make predictions about individuals based on the anonymous data of many other people. For example, gender, sexual orientation, ethnicity, illnesses, psychological dispositions, etc. can be predicted from the "behavioral data" (e.g. usage, tracking, or activity data) of a target individual by means of predictive models trained on anonymously processed data of social media users. My talk first points out that there is considerable potential for abuse associated with so-called "predictive analytics", which manifests itself as social inequality, discrimination, and exclusion. These potentials for abuse are not regulated by current data protection law (EU GDPR).
Under the term "predictive privacy", I will then present an approach in privacy and data protection that counters the risks of abuse of predictive analytics. By definition, the predictive privacy of a person or group is violated when sensitive information about them is predicted without their knowledge and against their will. I will formulate predictive privacy as a collective protected good of data protection. This leads to various suggestions for improving the GDPR with regard to the regulation of predictive analytics and inferred information. More information: https://predictiveprivacy.org
Rainer Mühlhoff is Professor of Ethics of Artificial Intelligence at the University of Osnabrück. His work focuses on ethics, social philosophy, and data protection in the context of digital media. In interdisciplinary collaborations, he brings together philosophy, media studies, and computer science to analyze the interplay of technology, power, and social change. Prof. Mühlhoff's website can be found here.