Big Data and AI technologies play an increasingly significant role in the lives of individuals and of society as a whole. Yet the omnipresence of these technologies raises numerous uncertainties and open questions: How will AI change our work, our lives, and the way we perceive ourselves? Against this background, how will we continue to take responsibility? What kind of education will we need in the future? And what, in fact, is responsible AI?
Our research offers a framework for responsible innovation in the development and deployment of AI-related technology. To this end, we focus on both the technological and the human factors that arise in the manifold interactions with AI. As an interdisciplinary team, we give equal weight to ethical, legal, technological, and pedagogical perspectives and address the relevant issues on different levels.
We examine where, when, and how the development and application of AI technologies generate norms. As a precondition, this requires analyzing where normatively relevant choices are located in technological design. Furthermore, we identify the explicit and implicit anthropologies of all stakeholders, as these shape their views on and understanding of AI, including its acceptance, possible capabilities, and limitations. Transparency requirements for AI are of central importance in this context, as is a regulatory framework for AI training data, which is to be developed in the course of our research. More broadly, we investigate from a legal perspective how these developments affect democratic discourse and its constitutional preconditions.
In addition, we research how the conditions and goals outlined above, which are predominantly shaped by law and politics, can be realized technologically, for example via privacy-preserving record linkage in data collection and machine learning. Furthermore, we aim to make language-processing systems transparent and explainable in their relevant functions (e.g., through argument-based explanations). Beyond privacy, transparency, and explainability, we seek ways to advance language processing so that it becomes better at handling language and turning it into actionable knowledge.
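To make the idea of privacy-preserving record linkage more concrete, the following minimal sketch shows one well-known approach from the literature: encoding quasi-identifiers (such as names) as keyed Bloom filters and comparing the encodings with a Dice similarity, so that records from different data holders can be matched without exchanging the raw identifiers. This is an illustrative sketch under stated assumptions, not a description of our actual pipeline; the shared key, sample values, and parameters are placeholders.

```python
import hashlib
import hmac

def bigrams(s: str) -> set[str]:
    """Split a normalized string into character 2-grams."""
    s = s.lower().strip()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(value: str, secret: bytes, m: int = 128, k: int = 4) -> set[int]:
    """Encode a quasi-identifier as a Bloom filter (set of set bit positions),
    hashing each bigram k times with a keyed hash so the raw value never
    leaves the data holder."""
    bits = set()
    for gram in bigrams(value):
        for i in range(k):
            digest = hmac.new(secret, f"{i}:{gram}".encode(), hashlib.sha256).digest()
            bits.add(int.from_bytes(digest[:4], "big") % m)
    return bits

def dice_similarity(a: set[int], b: set[int]) -> float:
    """Dice coefficient of two Bloom filters; tolerant of typos, since
    similar strings share most bigrams and hence most set bits."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two data holders encode their records locally with a shared secret key
# (a hypothetical placeholder here); only the encodings are compared.
SECRET = b"shared-linkage-key"
enc_a = bloom_encode("Johanna Meier", SECRET)
enc_b = bloom_encode("Johana Meier", SECRET)  # typo in the second source
print(f"similarity: {dice_similarity(enc_a, enc_b):.2f}")  # high score: likely the same person
```

A high similarity above a chosen threshold would flag the two records as a probable match; in practice, the filter length, the number of hash functions, and hardening measures against frequency attacks are central design choices of such schemes.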
Last but not least, we focus on ethical and educational questions, for today and even more for the future. It is crucial to understand AI both as a technology and as a factor that influences how we perceive ourselves and our sense of responsibility. We therefore analyze which competencies can already be taught in schools, especially with regard to students' everyday lives, in which they interact with these technologies. This includes technological knowledge and application competencies, as well as the ability to reflect on one's own role when powerful technologies become part of everyday life.
We understand responsible AI in a holistic sense, as a research goal that can only be achieved through interdisciplinary discourse. We therefore combine our respective specialist perspectives in four relevant fields of research: trustworthy technology (in terms of fairness, security, transparency, and explainability), a trustworthy framework (legal, ethical, and societal aspects), normative involvement (for example, in the context of technical design issues), and education (for various reference groups, applications, and developments). All of these issues are researched across disciplines and taught according to the premises of research-based teaching and learning.