
Argument-based Explanations in Dynamic Environments


Duration: until the end of 2026

Research Area: AI Algorithms and Methods

The field of formal argumentation has become a vibrant research area in Artificial Intelligence, covering aspects of knowledge representation, non-monotonic reasoning, multi-agent systems, and also philosophical questions. It deals with computational models of argument and argumentation in general, as well as with approaches and techniques for formalizing inference on the basis of arguments. The leading formalism in the field is that of so-called abstract argumentation frameworks (AFs) [Dung, 1995]. The main idea is that arguments can be evaluated on an abstract level, solely on the basis of their interactions. Over the last 15 years, more expressive formalisms have been introduced. One of the most powerful generalizations of Dung AFs is that of so-called abstract dialectical frameworks (ADFs) [Brewka et al., 2013]. Their additional expressive power allows for arbitrary relationships between arguments, including single as well as collective attack and support relations. In this project we deal with such expressive argumentation formalisms.
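The idea of evaluating arguments solely via their interactions can be illustrated with a minimal sketch. The toy framework and helper names below are our own, not part of the project; the sketch enumerates the admissible sets (conflict-free sets that defend all of their members) of a three-argument AF by brute force.

```python
from itertools import chain, combinations

# Toy AF: a attacks b, b attacks c.
args = ["a", "b", "c"]
attacks = {("a", "b"), ("b", "c")}

def conflict_free(s):
    # no member of s attacks another member of s
    return not any((x, y) in attacks for x in s for y in s)

def defends(s, arg):
    # every attacker of `arg` is counter-attacked by some member of s
    return all(any((d, attacker) in attacks for d in s)
               for (attacker, target) in attacks if target == arg)

def admissible(s):
    return conflict_free(s) and all(defends(s, x) for x in s)

all_sets = chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))
adm = [set(s) for s in all_sets if admissible(set(s))]
print(adm)  # three admissible sets: the empty set, {a}, and {a, c}
```

Here {a, c} is admissible because a has no attackers and a counter-attacks c's only attacker b, whereas {c} alone is not, since c cannot defend itself.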

Aims

One primary objective is to study novel semantics for expressive argumentation formalisms, drawing inspiration from recent breakthroughs in classical argumentation, in particular the emerging concept of weak admissibility [Baumann et al., 2022]. Another key goal is to create explanations of the semantic outcome that are understandable for non-experts. In addition, we will formally study how such explanations may evolve in a dynamic environment and whether it is possible to reuse already computed explanations.
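A hedged toy example (our own code, not the definitions of the cited paper) hints at why weak admissibility was proposed: when an attacker is self-defeating, classical admissibility can leave the empty set as the only admissible position.

```python
from itertools import chain, combinations

# Toy AF: "a" attacks itself and "b"; "b" attacks "c".
# Classically, "c" cannot be defended, because its only potential
# defender "a" is self-attacking, so no non-empty set is admissible.
# Weak admissibility is designed to discount such attackers.
args = ["a", "b", "c"]
attacks = {("a", "a"), ("a", "b"), ("b", "c")}

def admissible(s):
    conflict_free = not any((x, y) in attacks for x in s for y in s)
    defended = all(any((d, attacker) in attacks for d in s)
                   for (attacker, target) in attacks for x in s if target == x)
    return conflict_free and defended

subsets = chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))
adm = [set(s) for s in subsets if admissible(set(s))]
print(adm)  # only the empty set is classically admissible
```

Intuitively, b's attack on c should not count for much, since b is itself defeated by a self-attacking argument; capturing this intuition formally is what weaker notions of admissibility aim at.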

Problem

Many argumentation semantics are defined as fixed points of certain operators. Clearly, such a description is hardly suitable for explaining a specific outcome to a user. One central problem is to find an easily understandable explanation that does not oversimplify the actual semantic output [Baumann and Ulbricht, 2021]. Moreover, argumentation semantics are non-monotonic by nature: adding further information may invalidate previously reasonable positions. Consequently, reusing former explanations is a challenging task.
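The fixed-point flavour of such definitions can be made concrete with the grounded semantics of Dung AFs. The sketch below is our own illustration: the grounded extension is the least fixed point of the characteristic function F, where F(S) collects every argument whose attackers are all counter-attacked by S.

```python
# Toy AF: a chain of attacks a -> b -> c -> d.
args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}

def F(s):
    # characteristic function: arguments defended by s
    # (unattacked arguments are defended vacuously)
    return {x for x in args
            if all(any((d, attacker) in attacks for d in s)
                   for (attacker, target) in attacks if target == x)}

grounded = set()
while F(grounded) != grounded:   # iterate F up to its least fixed point
    grounded = F(grounded)
print(grounded)  # grounded extension: {a, c}
```

The iteration itself suggests one explanation style: a is in because nobody attacks it, and c is in because a defeats c's only attacker b. Turning such fixed-point traces into explanations that non-experts can follow, without oversimplifying, is exactly the difficulty described above.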


Outlook

Establishing theoretical foundations is essential for the successful application of expressive argumentation frameworks. Such a study contributes to making informed choices between different formalisms depending on the intended application. Powerful generalizations of abstract argumentation frameworks (AFs) empower us to tackle more complex applications in AI. For instance, they can serve as an additional explanatory component in AI systems or act as a suitable target formalism for instantiating knowledge bases.

Publications

  • Phan Minh Dung, On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artif. Intell. 77(2): 321-358 (1995)
  • Gerhard Brewka, Hannes Strass, Stefan Ellmauthaler, Johannes Peter Wallner, Stefan Woltran, Abstract Dialectical Frameworks Revisited. IJCAI 2013: 803-809
  • Ringo Baumann, Gerhard Brewka, Markus Ulbricht, Shedding new light on the foundations of abstract argumentation: Modularization and weak admissibility. Artif. Intell. 310: 103742 (2022)
  • Ringo Baumann, Markus Ulbricht, Choices and their Consequences – Explaining Acceptable Sets in Abstract Argumentation Frameworks. KR 2021: 110-119

Team

Lead

  • Prof. Dr. Ringo Baumann

Team Members

  • Dr. Markus Ulbricht
  • Matti Berthold
  • Anne-Marie Heine
Funded by:
  • The Federal Ministry of Education and Research
  • The Free State of Saxony