Sessions are listed chronologically with title, abstract, and a brief bio of the lecturers (shown on click). A program overview and a tabular schedule can be found in the program section of the school’s main page.
The special session type Connecting the Dots offers a question-and-answer format in which the audience can freely ask questions of a panel of lecturers (and possibly further experts).
10:00 – 11:15
11:30 – 12:30
This report explores the evolution and current state of neurosymbolic artificial intelligence, an approach that integrates neural network capabilities with symbolic reasoning. We trace the historical context from early AI aspirations to modern implementations and successes, highlighting key paradigms as well as logical and semantic considerations. We argue against the “scaling is all you need” hypothesis and point to persistent challenges in reliable symbolic reasoning with deep and large models. We conclude by suggesting that, despite numerous implementation choices and the “broad church” nature of neurosymbolic AI, these approaches offer the most promising path towards AI systems that combine pattern recognition with robust reasoning, particularly for applications requiring structured knowledge, explainability, and trustworthiness.

Dr Vaishak Belle (he/him) is Reader at the University of Edinburgh, an Alan Turing Fellow, and a Royal Society University Research Fellow. He is also Director of Research and Innovation at the Bayes Centre. He has made a career out of doing research on the science and technology of AI. He has published close to 150 peer-reviewed articles, won best paper awards, and consulted with banks on explainability. As PI and CoI, he has secured a grant income of over 10 million pounds.
14:00 – 15:15
Knowledge Graph and Ontology Engineering refers to a set of tasks that are of central relevance to the life cycle of knowledge graphs and ontologies.
These include, for example:
All these tasks are known to be very hard, in that they have so far defied attempts to automate them at reasonable quality levels and scale. However, the recent advent of LLMs may finally open pathways to some meaningful automation, or at least assistance to human knowledge engineers. We will discuss some of these recent advances in using Large Language Models for Knowledge Graph and Ontology Engineering, with particular emphasis on modular ontologies and complex ontology alignment.
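To make the kind of assistance discussed above concrete, here is a minimal, hypothetical sketch (in Python, not the lecturers’ system): it proposes candidate class alignments between two toy ontologies by comparing label tokens, producing the kind of candidate list an LLM or a human knowledge engineer might then be asked to verify or refine.

```python
# Illustrative sketch (not the lecturers' system): propose candidate class
# alignments between two small ontologies by comparing label tokens.
# In practice an LLM would be prompted to judge or refine such candidates;
# here a simple Jaccard score over label tokens stands in for that step.

def tokens(label: str) -> set[str]:
    return set(label.lower().replace("_", " ").split())

def candidate_alignments(onto_a, onto_b, threshold=0.5):
    """Return (class_a, class_b, score) pairs above a similarity threshold."""
    pairs = []
    for a in onto_a:
        for b in onto_b:
            ta, tb = tokens(a), tokens(b)
            score = len(ta & tb) / len(ta | tb)
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

# Toy class labels from two hypothetical ontologies
conference_onto = ["Conference Paper", "Program Committee Member", "Keynote Talk"]
event_onto = ["Accepted Paper", "Committee Member", "Invited Talk"]

for a, b, s in candidate_alignments(conference_onto, event_onto, threshold=0.3):
    print(f"{a}  <->  {b}   (score {s})")
```

In a real pipeline, the token-overlap score would be replaced or complemented by an LLM prompted with class definitions and surrounding axioms, with a human knowledge engineer confirming the final alignment.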

15:45 – 16:15
16:15 – 17:30
9:00 – 10:15
10:45 – 12:00
Deep Learning has recently caused very rapid advances in Artificial Intelligence, opening up unprecedented opportunities in machine learning. However, to date the accuracy of deep learning systems can only be assessed statistically: they are essentially black boxes, meaning that the rationales for their decisions or predictions defy analysis. In this presentation we will discuss some recent results towards understanding what happens inside the black box. More precisely, we will present our work on using concept induction over background knowledge to assign meaningful labels to hidden neuron activations. In particular, we will discuss our analysis of convolutional neural networks for scene recognition, and some more recent results using the same general method.
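As a rough illustration of the idea (a deliberate simplification, not the presented method), the following sketch labels a hidden neuron with the most specific background-knowledge concept covering the inputs that activate it most strongly; a least-common-ancestor lookup in a toy taxonomy stands in for full concept induction over a large class hierarchy.

```python
# Minimal sketch (a simplification, not the presented method): label a hidden
# neuron by the most specific background-knowledge concept that covers the
# inputs activating it most strongly.

# Toy background taxonomy: child -> parent
taxonomy = {
    "kitchen": "indoor_scene", "bedroom": "indoor_scene",
    "beach": "outdoor_scene", "forest": "outdoor_scene",
    "indoor_scene": "scene", "outdoor_scene": "scene",
}

def ancestors(concept):
    """Concept plus all of its superclasses, most specific first."""
    chain = [concept]
    while concept in taxonomy:
        concept = taxonomy[concept]
        chain.append(concept)
    return chain

def label_neuron(activating_concepts):
    """Most specific concept subsuming every strongly activating input."""
    common = set(ancestors(activating_concepts[0]))
    for c in activating_concepts[1:]:
        common &= set(ancestors(c))
    # the most specific common ancestor is the one deepest in the taxonomy
    return max(common, key=lambda c: len(ancestors(c))) if common else None

# Images most strongly activating a given hidden neuron were kitchens and bedrooms:
print(label_neuron(["kitchen", "bedroom"]))   # -> indoor_scene
```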

Pascal Hitzler is University Distinguished Professor and endowed Lloyd T. Smith Creativity in Engineering Chair at the Department of Computer Science at Kansas State University, Director (Research) of the Institute for Digital Agriculture and Advanced Analytics (ID3A), and Director of the Center for Artificial Intelligence and Data Science (CAIDS). He serves on the Kansas Legislature AI Taskforce and on the Samsung AI Advisory Board. His research record lists over 400 publications in such diverse areas as neuro-symbolic artificial intelligence, semantic web, knowledge graphs, knowledge representation and reasoning, denotational semantics, and set-theoretic topology, with over 19,000 citations. He was founding Editor-in-chief of the Semantic Web journal, the leading journal in the field, and is founding Editor-in-chief of the new Neurosymbolic Artificial Intelligence journal. He is co-author of the W3C Recommendation OWL 2 Primer, and of the book Foundations of Semantic Web Technologies by CRC Press, 2010, which was named one of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association’s Choice Magazine and has been translated into German and Chinese. He is founding steering committee member of the Neural-Symbolic Learning and Reasoning Association and the Association for Ontology Design and Patterns. For more information about him, see http://www.pascal-hitzler.de
12:00 – 12:30
14:00 – 15:15
15:45 – 17:15
15:45 – 17:15

The increasing adoption of large language models (LLMs) has raised serious concerns about their reliability and trustworthiness. As a result, growing attention is being directed toward evidence-based text generation with LLMs, which aims to link model outputs to supporting evidence to ensure traceability and verifiability. However, the field remains fragmented, with inconsistent terminology and diverse methodological approaches that can make it difficult to obtain a clear overview.
This presentation serves as an introduction to evidence-based text generation with LLMs. We clarify central concepts and terminology, present a unified taxonomy of system design choices, and examine representative approaches that incorporate citations, attributions, or quotations to ground model outputs in verifiable sources. In addition, we provide an overview of the current evaluation landscape, highlighting common strategies and remaining limitations in assessing evidence grounding and faithfulness. Finally, we discuss key challenges and promising directions for future work, equipping participants with a conceptual foundation for engaging with this rapidly evolving research area.
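The following sketch illustrates the basic idea of evidence grounding in a self-contained way (an assumed toy setup, not any particular system from the session): each generated sentence must carry a citation marker, and the cited passage is checked for lexical overlap with the claim; real systems would use entailment or attribution models instead of token overlap.

```python
# Illustrative sketch (assumed setup, not a specific system): check that each
# generated sentence cites a source and that the cited passage lexically
# overlaps with the claim. Real systems use entailment models; token overlap
# stands in for that here.
import re

sources = {
    "[1]": "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "[2]": "Paris is the capital and most populous city of France.",
}

generated = (
    "The Eiffel Tower was finished in 1889 [1]. "
    "Paris is the capital of France [2]."
)

def check_grounding(text, sources, min_overlap=0.3):
    for sentence in re.split(r"(?<=\.)\s+", text.strip()):
        cited = re.findall(r"\[\d+\]", sentence)
        if not cited:
            print("UNSUPPORTED (no citation):", sentence)
            continue
        claim = set(re.sub(r"\[\d+\]|\.", "", sentence).lower().split())
        evidence = set(sources.get(cited[0], "").lower().split())
        overlap = len(claim & evidence) / max(len(claim), 1)
        verdict = "ok" if overlap >= min_overlap else "weakly supported"
        print(f"{verdict:>16}: {sentence}")

check_grounding(generated, sources)
```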

Tobias Schreieder is a PhD student at TU Dresden and ScaDS.AI. His research interests lie in the areas of natural language processing, information retrieval, trustworthy AI, and privacy. With a focus on evidence-based text generation with LLMs, he develops methods that allow users to trace LLM-generated content back to its underlying sources through citations.

Prof. Dr.-Ing. Michael Färber is a Full Professor (W3) at the AI Center ScaDS.AI and TU Dresden, Germany, where he leads the “Scalable Software Architectures for Data Analytics” group. He previously served as Deputy Full Professor for Web Science at the Karlsruhe Institute of Technology (KIT). His research focuses on large language models, knowledge graphs, and graph neural networks, with an emphasis on trustworthy AI for science. He has published 120+ peer-reviewed papers at venues such as ACL, EMNLP, ISWC, CIKM, KDD, NAACL, and ICML.
9:00 – 10:00
Alessandra Mileo is Associate Professor in the School of Computing, Dublin City University. She is also a Principal Investigator in the Research Ireland Centre for Data Analytics, a Funded Investigator in the Advanced Manufacturing Research Centre, and a Fellow of the Higher Education Academy (FHEA). Dr. Mileo has secured over 1.5 million euros in funding, including national, international (EU, NSF) and industry-funded projects, has published 100+ papers, and is an active PC member of over 20 conferences and journals. Dr. Mileo is a member of the European AI Alliance, the Italian Association for AI (AIIA), AAAI, and the Association for Logic Programming (ALP), and has been a Steering Committee member of the Web Reasoning and Rule Systems Association (RRA) since 2015. Her current research agenda is focused on Explainable Artificial Intelligence, specifically leveraging Neuro-Symbolic Learning and Reasoning as well as Knowledge Graphs to support high-stakes decision making. Dr. Mileo has recently been awarded the national Frontiers for the Future Project grant from Research Ireland: this independent 4-year grant funds high-risk, high-reward research and will allow her to develop novel explainable and human-centered neuro-symbolic AI approaches in diagnostic imaging.
10:00 – 11:00
Neuro-Symbolic AI is becoming a fast-growing area of research. However, there is still a lot of potential for leveraging neuro-symbolic approaches to address the need for explainability and confidence. These are key requirements when it comes to using AI to support human experts in high-stakes decision making. In this talk I will discuss how neuron activation analysis, knowledge graphs, and deductive reasoning can be used as key ingredients in the design of a neurosymbolic cycle for human-centered explainability. I will discuss challenges in the design of such a cycle as well as opportunities for the adoption of Neuro-Symbolic AI in real-world scenarios, using the field of radiology as a reference scenario.
Alessandra Mileo is Associate Professor in the School of Computing, Dublin City University. She is also a Principal Investigator in the Research Ireland Centre for Data Analytics, a Funded Investigator in the Advanced Manufacturing Research Centre, and a Fellow of the Higher Education Academy (FHEA). Dr. Mileo has secured over 1.5 million euros in funding, including national, international (EU, NSF) and industry-funded projects, has published 100+ papers, and is an active PC member of over 20 conferences and journals. Dr. Mileo is a member of the European AI Alliance, the Italian Association for AI (AIIA), AAAI, and the Association for Logic Programming (ALP), and has been a Steering Committee member of the Web Reasoning and Rule Systems Association (RRA) since 2015. Her current research agenda is focused on Explainable Artificial Intelligence, specifically leveraging Neuro-Symbolic Learning and Reasoning as well as Knowledge Graphs to support high-stakes decision making. Dr. Mileo has recently been awarded the national Frontiers for the Future Project grant from Research Ireland: this independent 4-year grant funds high-risk, high-reward research and will allow her to develop novel explainable and human-centered neuro-symbolic AI approaches in diagnostic imaging.
11:45 – 13:00
9:00 – 10:15 | Part 1
10:45 – 12:00 | Part 2

Neurosymbolic AI aims to combine the strengths of learning and reasoning, but the field currently consists of many seemingly different approaches. This diversity makes it difficult to understand how existing methods relate to each other and how to design new neurosymbolic systems. In this tutorial, we present a unifying perspective showing that many neurosymbolic approaches can be understood through a shared formal framework based on deep parameterized logics. This perspective highlights common design dimensions underlying existing systems and clarifies how learning and logical reasoning interact. We connect these ideas to foundational concepts such as algebraic model counting and arithmetic circuits, and illustrate how they enable the principled construction of neurosymbolic systems in practice.
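As a minimal illustration of these ingredients (an assumed toy example, not the tutorial’s formal framework), the sketch below evaluates the probability of a tiny logical constraint in two equivalent ways: by weighted model counting over all truth assignments, and via the arithmetic circuit obtained from a decomposed form of the formula, with neural-network confidences playing the role of literal weights.

```python
# Minimal sketch of the probability-semiring instance of algebraic model
# counting (assumed toy example, not the tutorial's framework). The constraint
# "A or B" is evaluated (1) by enumerating models and (2) via a small
# arithmetic circuit; neural-network confidences act as literal weights.
from itertools import product

p = {"A": 0.9, "B": 0.2}          # e.g. probabilities predicted by a network

def weight(assignment):
    """Product of literal weights for one truth assignment."""
    w = 1.0
    for var, val in assignment.items():
        w *= p[var] if val else 1.0 - p[var]
    return w

def formula(a):                   # the symbolic constraint: A or B
    return a["A"] or a["B"]

# (1) weighted model counting by enumeration over all assignments
wmc = sum(weight(dict(zip("AB", vals)))
          for vals in product([True, False], repeat=2)
          if formula(dict(zip("AB", vals))))

# (2) the same value from an arithmetic circuit: A or B == not(not A and not B)
circuit = 1.0 - (1.0 - p["A"]) * (1.0 - p["B"])

print(wmc, circuit)               # both: 0.92
```

Because the circuit is differentiable in the weights, gradients of such constraint probabilities can flow back into the neural network, which is the basic mechanism exploited by many of the systems covered in the tutorial.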

Vincent Derkinderen is a postdoctoral researcher in the Declarative Languages and Artificial Intelligence (DTAI) research group at KU Leuven. His research focuses on the foundations of neurosymbolic AI and the integration of neural learning with logical and probabilistic reasoning. His work builds on expertise in knowledge compilation and (weighted) model counting, including their applications to probabilistic logic programming and efficient probabilistic inference. More recently, he has been studying how these foundations extend to neurosymbolic settings, contributing to unifying formalisms such as deep parameterized logics and the DeepLog framework.

Giuseppe Marra is an Assistant Professor in the Declarative Languages and Artificial Intelligence (DTAI) research group at KU Leuven, where he co-leads the Neurosymbolic AI Lab with Prof. Luc De Raedt. His research focuses on the integration of neural computation and symbolic reasoning, with an emphasis on logical and probabilistic methods for neurosymbolic AI. He has contributed to several influential neurosymbolic frameworks and works on foundations and applications of neurosymbolic learning in areas such as concept-based models and safe reinforcement learning.
13:30 – 14:30
Learning to construct new, interesting, and useful lemmas is an important and long-standing challenge in AI for mathematical reasoning, albeit less explored than automating reasoning itself. Historically, various symbolic and heuristic methods have been proposed, and recent developments in generative AI have opened new opportunities to use neuro-symbolic architectures.
In this lecture, I will give an overview of the field and, in particular, talk about some recent research on neuro-symbolic lemma conjecturing for proof assistants and formalised mathematics. Mathematicians and computer scientists are increasingly using proof assistants to formalise and check the correctness of complex proofs. This is, however, a non-trivial task in itself, with high demands on human expertise. Can we lower the bar by introducing automation for conjecturing helpful, interesting and novel lemmas? Automatically discovered lemmas can aid a human user working on a mathematical formalisation, strengthen automated theorem provers, and perhaps also become useful in agentic workflows.
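To give a flavour of conjecturing-by-testing (a toy caricature in the spirit of tools such as QuickSpec and Hipster, not the systems presented in the lecture), the sketch below enumerates a few candidate equations about list reversal and keeps those that survive random testing; a survivor such as rev(xs + ys) == rev(ys) + rev(xs) is exactly the kind of auxiliary lemma an induction proof of rev(rev(xs)) == xs may need.

```python
# Toy sketch of conjecturing-by-testing (illustrative only): enumerate
# candidate equations between small list expressions and keep those that
# survive random testing; survivors become lemma conjectures for a prover.
import random

def rev(xs): return xs[::-1]

# candidate equations over list variables xs, ys
candidates = {
    "rev(xs + ys) == rev(xs) + rev(ys)": lambda xs, ys: rev(xs + ys) == rev(xs) + rev(ys),
    "rev(xs + ys) == rev(ys) + rev(xs)": lambda xs, ys: rev(xs + ys) == rev(ys) + rev(xs),
    "rev(rev(xs)) == xs":                lambda xs, ys: rev(rev(xs)) == xs,
}

def survives_testing(prop, trials=200):
    """Property-based filtering: reject any equation with a random counterexample."""
    for _ in range(trials):
        xs = [random.randint(0, 3) for _ in range(random.randint(0, 5))]
        ys = [random.randint(0, 3) for _ in range(random.randint(0, 5))]
        if not prop(xs, ys):
            return False
    return True

for name, prop in candidates.items():
    status = "conjecture" if survives_testing(prop) else "falsified"
    print(f"{status:>10}: {name}")
```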
Moa Johansson is an associate professor at Chalmers University of Technology in Gothenburg, Sweden, working on neuro-symbolic methods for mathematics, automated reasoning, and formal methods, in addition to application areas such as cognitive science and language. She did her PhD at the University of Edinburgh and has a long-standing interest in systems capable of assisting with the creative steps of mathematical discovery, such as suggesting interesting, novel and useful lemmas in formalisations of maths.
15:00 – 15:30
15:30 – 17:00
15:30 – 17:00
9:00 – 10:15
Commonsense reasoning is a longstanding challenge, often called the “dark matter” of AI. This session will introduce relevant aspects of naive physics and folk psychology, as well as efforts to organize commonsense axioms, define dimensions of knowledge, and systematize reasoning. We will cover neural, symbolic, and hybrid (neuro-symbolic) techniques that promise robustness and explainability. We will discuss relevant tasks and benchmarks for commonsense reasoning. Finally, we will discuss the contextual defeasibility of commonsense reasoning and reflect on its universality from a moral and cultural perspective.
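As a small self-contained illustration of contextual defeasibility (a toy example, not taken from the session materials), the sketch below applies a commonsense default unless a more specific context supplies an exception.

```python
# Toy illustration (not from the session materials) of contextual
# defeasibility: a commonsense default holds unless a more specific
# context provides an exception.

defaults   = {"bird": "can fly"}                       # birds normally fly
exceptions = {("bird", "penguin"): "cannot fly",       # unless it is a penguin
              ("bird", "injured"): "cannot fly"}       # or it is injured

def conclude(kind, contexts=()):
    """Apply the most specific applicable rule for the given contexts."""
    for ctx in contexts:
        if (kind, ctx) in exceptions:
            return exceptions[(kind, ctx)]
    return defaults.get(kind, "unknown")

print(conclude("bird"))                          # can fly
print(conclude("bird", contexts=("penguin",)))   # cannot fly
```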
Filip Ilievski is an Assistant Professor at VU Amsterdam and a scientist at USC’s Information Sciences Institute. His research focuses on human-centric AI, specializing in commonsense reasoning, neurosymbolic methods, and analogy. Currently, he leads the NWO-funded “Human-Centric AI with Common Sense” project and a commonsense AI lab with 9 members. With over 100 publications and two books, Filip has developed foundational resources like the CommonSense Knowledge Graph (CSKG), methods for robust and explainable reasoning, and theory-aligned benchmarks for abstraction and reasoning. He holds leadership roles at the ELLIS Unit Amsterdam and the Digital Sustainability Center, and his work on visual abstraction was recently featured by the BBC.
10:45 – 12:00
Abstraction is the act of distilling experiences into an abstract schema, which can be used for more efficient learning, generalization to new scenarios, or explanation to humans. Abstraction is an interdisciplinary challenge, with extensive work in cognitive psychology, linguistics, perception, and AI. This session will discuss different notions of abstraction from an AI perspective. We will cover typical abstraction mechanisms and representations in AI, and their suitability in various scenarios. We will review studies on the ability of foundation models to perform adequate abstraction and contrast it with that of humans. Then, we will dive deeper into analogical abstraction. Finally, we will discuss the relationship between abstraction, context, and ambiguity.
Filip Ilievski is an Assistant Professor at VU Amsterdam and a scientist at USC’s Information Sciences Institute. His research focuses on human-centric AI, specializing in commonsense reasoning, neurosymbolic methods, and analogy. Currently, he leads the NWO-funded “Human-Centric AI with Common Sense” project and a commonsense AI lab with 9 members. With over 100 publications and two books, Filip has developed foundational resources like the CommonSense Knowledge Graph (CSKG), methods for robust and explainable reasoning, and theory-aligned benchmarks for abstraction and reasoning. He holds leadership roles at the ELLIS Unit Amsterdam and the Digital Sustainability Center, and his work on visual abstraction was recently featured by the BBC.
13:30 – 14:00
14:00 – 15:00
Neuro-symbolic AI combines the data-driven strengths of machine learning with the logical reasoning and transparency of symbolic systems, offering a transformative approach for industrial applications. While significant progress has been made in academic research, industrial adoption remains in its early stages. In this talk, I will present our journey at Bosch in trying to bridge the gap between research and real-world use cases, focusing on the combination of Answer Set Programming (ASP) and Knowledge Graphs with machine learning methods, e.g., large language models. I will present our attempts to apply these hybrid AI approaches in diverse domains such as conceptual system configuration, production optimization, and market analysis, while also highlighting key open research questions. Part of the work presented in this talk is the result of a collaboration between Bosch and the Vienna University of Technology.
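As a flavour of what such a combination can look like in code (a hedged, miniature example, not Bosch’s system), the sketch below solves a tiny configuration problem with Answer Set Programming through the clingo Python API; each answer set corresponds to one admissible configuration, and constraints rule out incompatible component choices.

```python
# Miniature, hypothetical configuration example (not Bosch's system) solved
# with Answer Set Programming via the clingo Python API (requires the
# 'clingo' package). Each answer set is one valid configuration.
import clingo

program = """
  component(sensor_a; sensor_b; controller).
  % choose exactly one sensor; the controller is mandatory
  1 { selected(S) : component(S), S != controller } 1.
  selected(controller).
  % constraint: sensor_b is incompatible with the controller
  :- selected(sensor_b), selected(controller).
"""

ctl = clingo.Control(["0"])          # "0" = enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("configuration:", m))
```

In an industrial setting, the facts and weights of such a program could come from knowledge graphs or machine-learned components (for example, LLM-extracted requirements), while the ASP layer keeps the final configuration logically consistent and explainable.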