Transparency as a Fundamental Principle of Trustworthy AI: Regulatory Framework and Challenges (Prof. Lauber-Rönsberg)
As part of our research, we address two fundamental issues related to trustworthy AI. On the one hand, there are the ethical guidelines. Here we see transparency as a key requirement that AI systems should meet in order to be deemed trustworthy. This applies not only to implementation, where humans need to be aware that they are interacting with an AI system, but also to transparency of the data, the system's capabilities and AI business models, as well as to explainable AI.
On the other hand, we need a legal framework. This includes data protection law, where we need to clarify: What is the scope and effectiveness of information obligations? What specific obligations apply to automated decision-making systems with regard to disclosing the "logic involved"? It also includes fair trading and consumer protection law, since data-driven marketing tools shape decision-making architectures and thus influence consumers' decisions, which raises the question of how subliminal marketing practices should be legally regulated.