December 2, 2025
Our colleagues at ScaDS.AI Dresden/Leipzig, Marvin Großer, Matti Berthold and Quentin Manière, traveled to Melbourne, Australia, to attend the 22nd International Conference on Principles of Knowledge Representation and Reasoning (KR 2025), where they presented three papers and a video. KR 2025 is the top conference in their research area. Knowledge Representation (KR), a well-established field of research within Artificial Intelligence, focuses on how to model human knowledge in a formal way. These formalisms allow KR-based AI systems to excel at reasoning tasks, one of the main weaknesses of artificial neural networks. Notably, they make it possible to exploit knowledge that would otherwise remain implicit, through semantically grounded inference mechanisms employed in symbolic reasoning engines. The field mainly studies different kinds of knowledge representation formalisms with respect to their expressive power, reasoning efficiency and explainability. KR has contributed to the theory and practice of various AI areas, including agents, automated planning, robotics, and natural language processing. It has also influenced fields beyond AI, including data management, the Semantic Web, verification, software engineering, computational biology and cybersecurity.
Marvin Großer and Quentin Manière are especially interested in Description Logics (DLs). DLs constitute a prominent formalism for expressing knowledge and notably underlie several Semantic Web standards. Matti Berthold, on the other hand, works on Abstract Argumentation, another domain of KR that models and reasons about conflicting information without committing to the internal structure of arguments. Here is a summary of their presentations at KR 2025:
With their joint paper, Federica Di Stefano, Quentin Manière, Magdalena Ortiz and Mantas Šimkus won second place for the Best Paper Award. They proved that finding simple counterexamples to even basic statements in Description Logics can be impossible. This means that, in order to understand why a simple claim is false, you might need to prepare for some complex explanations. The authors study reasoning with minimal models and demonstrate that minimizing all predicates leads to undecidability, even in very lightweight Description Logics. They then identify cyclicity conditions that restore decidability and connect these results to pointwise circumscription. The paper also shows that slightly extending DL-Lite, a family of Description Logics designed to offer a good balance between expressive power and efficient reasoning, significantly increases complexity.

With Maurice Funk and Carsten Lutz, Marvin Großer develops methods to construct background knowledge, called an ontology, from examples. Intuitively, such examples can represent things that must be reflected in the background knowledge, as well as things that should not be reflected in it. The authors study when it is possible to find an ontology that fits a given collection of examples and how to construct it. They consider different types of queries used in the examples (single facts, combined facts, and their unions) and the ontology languages ALC and ALCI. They show that the difficulty of finding a fitting ontology varies with the query type: it remains feasible for simple queries but becomes challenging for more complex ones.
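To give a flavour of what "fitting" means here, consider a small hypothetical instance of our own (not an example taken from the paper): a positive example could demand that, from the data Prof(a) and teaches(a, b), the fact Course(b) must follow, while a negative example could demand that Prof(b) must not follow. A one-axiom ontology that fits both examples is, for instance:

```latex
% Hypothetical illustration only; the names and the axiom are not from the paper.
\[
  \mathcal{O} \;=\; \{\, \mathsf{Prof} \sqsubseteq \forall\,\mathsf{teaches}.\mathsf{Course} \,\}
\]
```

Together with the data, this ontology entails Course(b), since everything taught by a professor must be a course, but it does not entail Prof(b), so both examples are satisfied.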
Matti Berthold presented joint work with his colleagues Anna Rapberger from Dortmund and Lydia Blümel from Hagen on structured argumentation. Specifically, they studied explanations that ignore implausible arguments, as well as explanations that rely on a ground truth. The paper explores two alternative notions of admissibility, strong and weak admissibility, within assumption-based argumentation (ABA). It extends these notions from the simpler “flat” setting to more general, non-flat ABA. Using bipolar set-based argumentation frameworks, the authors define corresponding preferred, complete, and grounded semantics. They show that key modularity properties still hold and analyze remaining limitations compared to standard admissibility. They also introduce strong admissibility for ABA for the first time and discuss how to mitigate shared shortcomings.
Matti Berthold, Marvin Großer, Simon Hosemann, Quentin Manière, Moritz Schönherr and Lukas Schulze won the Best Video Award as well as the Public Choice Award! Their video is an introduction to the chase algorithm, a central technique for enriching ordinary databases with additional background knowledge (represented, for example, using DLs). Databases contain only ground observations and basic facts, but the chase lets you see the consequences of these facts. Motivated by the goal of enriching a database with new facts derived from meta-knowledge about a specific application domain, the chase algorithm incorporates this additional information to produce more complete and relevant answers to queries. By providing existential rules together with the dataset, the chase can uncover implicit knowledge and expand the data accordingly.
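To make this concrete, below is a minimal sketch of a single chase step in Python. This is our own toy illustration, not code from the video: the fact format, the predicate names (Employee, worksFor, Department) and the single existential rule Employee(x) → ∃y. worksFor(x, y) ∧ Department(y) are all made up for the example. The rule is applied in the restricted-chase style, introducing a fresh labelled null only when the data does not already provide a witness.

```python
# Toy illustration of one chase step (not the code behind the KR 2025 video).

from itertools import count

# Database: a set of ground facts, each stored as a (predicate, args...) tuple.
facts = {
    ("Employee", "alice"),
    ("Employee", "bob"),
    ("worksFor", "bob", "d1"),
    ("Department", "d1"),
}

fresh = count(1)  # supplies fresh labelled nulls for existential variables


def chase_employee_rule(facts):
    """Apply the existential rule
         Employee(x) -> exists y. worksFor(x, y) and Department(y)
    in the restricted style: only introduce a fresh null for x when
    the data does not already contain a witness."""
    new_facts = set()
    for pred, *args in facts:
        if pred != "Employee":
            continue
        (x,) = args
        # Check whether the rule is already satisfied for x.
        has_witness = any(
            f[0] == "worksFor" and f[1] == x and ("Department", f[2]) in facts
            for f in facts
        )
        if not has_witness:
            y = f"_null{next(fresh)}"  # labelled null: "some department"
            new_facts.add(("worksFor", x, y))
            new_facts.add(("Department", y))
    return facts | new_facts


expanded = chase_employee_rule(facts)
print(sorted(expanded))
```

After this step, a query such as "which employees work for some department?" also returns alice, even though the original database said nothing about her department; this is exactly the kind of implicit knowledge the chase makes visible.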
Feature photo: Matti Berthold, Marvin Großer, Simon Hosemann and Quentin Manière receive the Best Video Award from Antonio Rago and Magdalena Ortiz. Copyright © Maurice Pagnucco