
Scientific Program

Sessions are listed chronologically with title, abstract, and a brief bio of the lecturers (shown on click). A program overview and tabular schedule are in the program section of the school’s main page.

The special session type Connecting the Dots offers a question-and-answer format in which the audience can freely put questions to a panel of lecturers (and possibly further experts).

Large Language Models: Capabilities, Limitations and the Way Forward

Marco Valentino
10:00 – 11:15

The emergence of Large Language Models (LLMs) has fundamentally redefined the paradigm of Natural Language Processing (NLP), transitioning the field from task-specific architectures toward unified, general-purpose models. This lecture provides a rigorous examination of the current state of LLM research, beginning with the foundational technical aspects of the Transformer architecture and the underlying mechanisms of self-attention. Subsequently, we will investigate the extent of LLM capabilities and limitations, with a specific focus on the nature of reasoning and generalization.

We will analyze empirical evidence derived from robust evaluation frameworks and mechanistic interpretability to diagnose whether LLMs reflect genuine logical processing or sophisticated surface-level heuristics, specifically addressing the tension between formal and plausible reasoning. The session concludes by exploring the future trajectory of the field, evaluating the integration of neuro-symbolic AI as a primary framework for reconciling statistical pattern matching with formal, structured reasoning.
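For readers unfamiliar with the self-attention mechanism mentioned above, a minimal pure-Python sketch (single head, no learned projections, queries = keys = values) may help; this is an illustrative toy, not the lecture's material:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Scaled dot-product self-attention for one head: each output
    vector is a softmax-weighted mixture of the value vectors, with
    weights given by query-key similarity."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Toy example: 3 tokens with 2-dimensional embeddings.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = self_attention(x, x, x)
```

Since the attention weights sum to one, every output vector is a convex combination of the value vectors.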


Marco Valentino is a Lecturer in Artificial Intelligence and Applications of AI in the Natural Language Processing (NLP) group at the University of Sheffield. Prior to Sheffield, he was a member of the Neuro-Symbolic AI Group at the Idiap Research Institute in Switzerland, and obtained a PhD in Computer Science from the University of Manchester. His research focuses on developing AI systems that can use explanation as a core mechanism for learning and reasoning, investigating the integration of neural and symbolic AI methods. Moreover, he is interested in developing methodologies to interpret, control, and evaluate Large Language Models (LLMs), with a focus on disentangling knowledge acquisition from abstract logical reasoning, and enabling out-of-distribution, out-of-domain generalisation. His research on neuro-symbolic NLP received a best resource paper award at EMNLP 2025 and an outstanding paper award at EMNLP 2024.

The Future is Neuro-Symbolic: Where Has It Been and Where Is It Going?

Vaishak Belle
11:30 – 12:30

This lecture explores the evolution and current state of neurosymbolic artificial intelligence, an approach that integrates neural network capabilities with symbolic reasoning. We trace the historical context from early AI aspirations to modern implementations and successes, highlighting key paradigms and other logical and semantic considerations. We argue against the “scaling is all you need” hypothesis, and point to persistent challenges in reliable symbolic reasoning with deep and large models. We conclude by suggesting that, despite numerous implementation choices and the “broad church” nature of neuro-symbolic AI, these approaches offer the most promising path towards AI systems that combine pattern recognition with robust reasoning, particularly for applications requiring structured knowledge, explainability, and trustworthiness.


Dr Vaishak Belle (he/him) is Reader at the University of Edinburgh, an Alan Turing Fellow, and a Royal Society University Research Fellow. He is also Director of Research and Innovation at the Bayes Centre. He has made a career out of doing research on the science and technology of AI. He has published close to 150 peer-reviewed articles, won best paper awards, and consulted with banks on explainability. As PI and CoI, he has secured a grant income of over 10 million pounds.

LLMs for KG and Ontology Engineering

Pascal Hitzler
14:00 – 15:15

Knowledge Graph and Ontology Engineering refers to a set of tasks that are of central relevance to the life cycle of knowledge graphs and ontologies.
These include, for example:

  • ontology modeling (i.e., construction)
  • ontology population (i.e., creating a knowledge graph with the given ontology as schema)
  • ontology extension and modification
  • ontology alignment
  • entity disambiguation.

All these tasks are known to be very hard, in that they have so far defied attempts to automate them at reasonable quality levels and scale. However, the recent advent of LLMs may finally open pathways to some meaningful automation, or at least assistance to human knowledge engineers. We will discuss some of these recent advances in using Large Language Models for Knowledge Graph and Ontology Engineering, with particular emphasis on modular ontologies and complex ontology alignment.


Pascal Hitzler is University Distinguished Professor and endowed Lloyd T. Smith Creativity in Engineering Chair at the Department of Computer Science at Kansas State University, Director (Research) of the Institute for Digital Agriculture and Advanced Analytics (ID3A), and Director of the Center for Artificial Intelligence and Data Science (CAIDS). He serves on the Kansas Legislature AI Taskforce and on the Samsung AI Advisory Board. His research record lists over 400 publications in such diverse areas as neuro-symbolic artificial intelligence, semantic web, knowledge graphs, knowledge representation and reasoning, denotational semantics, and set-theoretic topology, with over 19,000 citations. He was founding Editor-in-chief of the Semantic Web journal, the leading journal in the field, and is founding Editor-in-chief of the new Neurosymbolic Artificial Intelligence journal. He is co-author of the W3C Recommendation OWL 2 Primer, and of the book Foundations of Semantic Web Technologies by CRC Press, 2010, which was named as one out of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association’s Choice Magazine, and has translations into German and Chinese. He is founding steering committee member of the Neural-Symbolic Learning and Reasoning Association and the Association for Ontology Design and Patterns. For more information about him, see http://www.pascal-hitzler.de.

Connecting the Dots

15:45 – 16:15


Unifying Plausible and Formal Reasoning via LLM-Driven Neuro-Symbolic Integration

Marco Valentino
16:15 – 17:30

A persistent challenge in artificial intelligence is the effective integration of plausible and formal reasoning: the former concerning the contextual relevance and likelihood of arguments, and the latter focusing on their logical and structural validity. Large Language Models (LLMs) are uniquely positioned within this tension. While their extensive pre-training enables the generation of highly plausible and linguistically fluent discourse, they often lack the systematicity and consistency required for robust logical reasoning. However, the flexible representational space of LLMs also offers novel opportunities to study and mitigate this intrinsic conflict.

This lecture examines several emerging research trajectories aimed at reconciling these two modes of reasoning. We will discuss LLM-driven neuro-symbolic integration, the use of quasi-symbolic abstractions to bridge neural and formal representations, and the role of latent circuit disentanglement in identifying the internal mechanisms responsible for specific reasoning tasks. The session concludes by addressing the persisting challenges in achieving truly unified reasoning and outlining future directions for the field.




Foundations of Neural Networks – Part 1

Ostap Okhrin
9:00 – 10:15

These lectures (Part 1+2) link classical statistical estimation with modern neural networks from a convergence and inference perspective. Starting with parametric models such as the sample mean and linear regression, we recall root-n rates and asymptotic normality. We then move to nonparametric methods, highlighting the trade-off between flexibility and statistical efficiency and the role of dimensionality. Neural networks are introduced as flexible function estimators. We discuss universal approximation results, convergence rates under structural assumptions, and minimax optimality. Finally, recent developments on statistical inference for neural networks, including asymptotic normality and bootstrap methods, are reviewed, together with current challenges.
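The rates recalled above take a familiar form in standard notation (illustrative, not lecture material): for an i.i.d. sample with mean \(\mu\) and finite variance \(\sigma^2\), the sample mean obeys the central limit theorem, while nonparametric estimators of smooth functions slow down with dimension.

```latex
% Parametric root-n rate for the sample mean (central limit theorem):
\sqrt{n}\,\bigl(\bar{X}_n - \mu\bigr) \;\xrightarrow{d}\; \mathcal{N}\bigl(0,\ \sigma^2\bigr)

% Minimax rate for estimating a \beta-smooth regression function of
% d variables, illustrating the role of dimensionality:
n^{-\beta/(2\beta + d)}
```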


Dr. Ostap Okhrin is Professor of Applied Statistics at the Faculty of Transportation at TU Dresden. He is the author and co-author of nearly 100 publications in mathematical and applied statistics, econometrics, and reinforcement learning, with applications in finance, economics, and autonomous driving.

Analysis of Hidden Neuron Activations in Deep Learning Models using Concept Induction Reasoning

Pascal Hitzler
10:45 – 12:00

Deep Learning has recently caused very rapid advances in Artificial Intelligence, opening up unprecedented opportunities in machine learning. However, to date the accuracy of deep learning systems can only be assessed statistically: they are essentially black boxes, meaning that their rationales for their decisions or predictions defy analysis. In this presentation we will discuss some recent results towards understanding what happens inside the black box. More precisely, we will present our work on using concept induction over background knowledge to assign meaningful labels to hidden neuron activations. We will in particular discuss our analysis of convolutional neural networks for scene recognition, and some more recent results using the same general method.
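As a very rough intuition for neuron labeling (a toy stand-in, not the concept induction method of the talk): given the inputs that most activate a hidden neuron, one can pick, from a small set of candidate concepts with known instances, the one whose extension best overlaps that set. The concept names and instance ids below are hypothetical.

```python
def best_concept_label(high_activation_inputs, concepts):
    """Toy stand-in for neuron labeling: return the candidate concept
    whose extension (set of instances) best matches the inputs that
    strongly activate a neuron, scored by Jaccard overlap."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(concepts,
               key=lambda name: jaccard(high_activation_inputs, concepts[name]))

# Hypothetical background knowledge: concept name -> instances.
concepts = {
    "Road":   {"img1", "img4", "img7"},
    "Forest": {"img2", "img3", "img5"},
    "Water":  {"img6"},
}
# Inputs that most activate one hidden neuron:
neuron_top = {"img2", "img3", "img8"}
label = best_concept_label(neuron_top, concepts)  # -> "Forest"
```

Real concept induction works over a description-logic ontology rather than flat instance sets, which is what makes the resulting labels meaningful.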



Connecting the Dots

12:00 – 12:30

Foundations of Neural Networks – Part 2

Ostap Okhrin
14:00 – 15:15

These lectures (Part 1+2) link classical statistical estimation with modern neural networks from a convergence and inference perspective. Starting with parametric models such as the sample mean and linear regression, we recall root-n rates and asymptotic normality. We then move to nonparametric methods, highlighting the trade-off between flexibility and statistical efficiency and the role of dimensionality. Neural networks are introduced as flexible function estimators. We discuss universal approximation results, convergence rates under structural assumptions, and minimax optimality. Finally, recent developments on statistical inference for neural networks, including asymptotic normality and bootstrap methods, are reviewed, together with current challenges.



Solving Mazes Using LLMs

Lalith Manjunath
15:45 – 17:15

This hands-on, hackathon-style workshop explores the intersection of Large Language Models and classical problem-solving. While traditional Symbolic AI relies on explicit logic and rules to navigate environments like mazes, we will investigate a different approach: using LLMs as latent probability generators. Participants will learn how to use structured prompting as a functional interface to extract navigational logic from these models. The session will cover the challenges of spatial reasoning in generative AI and how to bridge the gap between probabilistic outputs and deterministic maze-solving requirements.
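The probabilistic/deterministic bridge described above can be sketched as a validation loop: the model proposes a move through a structured prompt, and deterministic maze logic accepts or rejects it. The `llm_propose` stub below merely stands in for a real LLM call; the prompt format is an assumption, not the workshop's actual setup.

```python
MAZE = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..G#",
    "#####",
]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def find(ch):
    # Locate a character (start 'S' or goal 'G') in the maze grid.
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def build_prompt(pos):
    # Structured prompt: constrain the model to a fixed output vocabulary.
    return (f"You are at row {pos[0]}, column {pos[1]} in this maze:\n"
            + "\n".join(MAZE)
            + "\nReply with exactly one word: up, down, left or right.")

def llm_propose(prompt):
    # Stub standing in for a real LLM call, so the example runs offline.
    return "right"

def step(pos, move):
    # Deterministic side of the bridge: reject proposals that hit a wall.
    dr, dc = MOVES[move]
    r, c = pos[0] + dr, pos[1] + dc
    return pos if MAZE[r][c] == "#" else (r, c)

pos = find("S")
proposal = llm_propose(build_prompt(pos))
pos = step(pos, proposal)  # moving right from S is legal -> (1, 2)
```

The key design point is that the LLM never mutates state directly; every proposal passes through `step`, which enforces the maze's rules.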


Lalith Manjunath is a researcher and PhD student at TU Dresden / ScaDS.AI Dresden/Leipzig, where his work focuses on the intersection of Deep Learning, High-Performance Computing, and NLP. He is interested in the deployment of Large Language Models within latency-critical environments like simulations and video games. This requires a holistic approach, where interesting research ideas must be combined with reliable engineering practices, from leveraging hardware-specific efficient kernels to managing data challenges. His research explores how to architect these systems so that probabilistic model outputs are effectively integrated with deterministic logic without compromising real-time performance.

Evidence-Based Text Generation with LLMs

Tobias Schreieder, Michael Färber
15:45 – 17:15

The increasing adoption of large language models (LLMs) has raised serious concerns about their reliability and trustworthiness. As a result, growing attention is being directed toward evidence-based text generation with LLMs, which aims to link model outputs to supporting evidence to ensure traceability and verifiability. However, the field remains fragmented, with inconsistent terminology and diverse methodological approaches that can make it difficult to obtain a clear overview.

This presentation serves as an introduction to evidence-based text generation with LLMs. We clarify central concepts and terminology, present a unified taxonomy of system design choices, and examine representative approaches that incorporate citations, attributions, or quotations to ground model outputs in verifiable sources. In addition, we provide an overview of the current evaluation landscape, highlighting common strategies and remaining limitations in assessing evidence grounding and faithfulness. Finally, we discuss key challenges and promising directions for future work, equipping participants with a conceptual foundation for engaging with this rapidly evolving research area.
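The core idea of grounding outputs in verifiable sources can be illustrated with a deliberately naive sketch: each generated claim either carries the id of a supporting source or is flagged as unsupported. Real systems use retrieval and entailment models; the substring matching and the document ids below are only stand-ins.

```python
def cite_claims(claims, sources):
    """Toy illustration of evidence-grounded output: attach to each
    claim the id of a source containing its key phrase, or mark it
    unsupported so a reader can spot ungrounded statements."""
    grounded = []
    for claim, phrase in claims:
        support = [sid for sid, text in sources.items() if phrase in text]
        marker = f"[{support[0]}]" if support else "[unsupported]"
        grounded.append(f"{claim} {marker}")
    return grounded

# Hypothetical source documents and (claim, key phrase) pairs.
sources = {
    "doc1": "The model was trained on 1.2M documents.",
    "doc2": "Evaluation used five benchmark datasets.",
}
claims = [
    ("Training used 1.2M documents.", "1.2M documents"),
    ("The system is fully reliable.", "fully reliable"),
]
result = cite_claims(claims, sources)
# result[0] == 'Training used 1.2M documents. [doc1]'
# result[1] == 'The system is fully reliable. [unsupported]'
```

Even this crude version shows the payoff of attribution: the unsupported claim is visibly flagged rather than silently asserted.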


Tobias Schreieder is a PhD student at TU Dresden and ScaDS.AI. His research interests lie in the areas of natural language processing, information retrieval, trustworthy AI, and privacy. With a focus on evidence-based text generation with LLMs, he develops methods that allow users to trace LLM-generated content back to its underlying sources through citations.


Prof. Dr.-Ing. Michael Färber is a Full Professor (W3) at the AI Center ScaDS.AI and TU Dresden, Germany, where he leads the “Scalable Software Architectures for Data Analytics” group. He previously served as Deputy Full Professor for Web Science at the Karlsruhe Institute of Technology (KIT). His research focuses on large language models, knowledge graphs, and graph neural networks, with an emphasis on trustworthy AI for science. He has published 120+ peer-reviewed papers at venues such as ACL, EMNLP, ISWC, CIKM, KDD, NAACL, and ICML.


TBD

Alessandra Mileo, Daria Stepanova
9:00 – 10:00

Alessandra Mileo is Associate Professor in the School of Computing, Dublin City University. She is also a Principal Investigator in the Research Ireland Centre for Data Analytics, a Funded Investigator in the Advanced Manufacturing Research Centre, and a Fellow of the Higher Education Academy (FHEA). Dr. Mileo has secured over 1.5 million euros in funding, including national, international (EU, NSF) and industry-funded projects, has published 100+ papers, and is an active PC member of over 20 conferences and journals. Dr. Mileo is a member of the European AI Alliance, the Italian Association for AI (AIIA), AAAI, and the Association for Logic Programming (ALP), and a Steering Committee member of the Web Reasoning and Rule Systems Association (RRA) since 2015. Her current research agenda is focused on Explainable Artificial Intelligence, specifically leveraging Neuro-Symbolic Learning and Reasoning as well as Knowledge Graphs to support high-stakes Decision Making. Dr. Mileo has recently been awarded the national Frontiers for the Future Project grant from Research Ireland: this independent 4-year grant aims to fund high-risk, high-reward research and will allow her to develop novel explainable and human-centered neuro-symbolic AI approaches in diagnostic imaging.

Understanding Neuron’s Activations via Knowledge Graphs for Human-Centered Explainability

Alessandra Mileo
10:00 – 11:00

Neuro-Symbolic AI is becoming a fast-growing area of research. However, there is still a lot of potential for leveraging neuro-symbolic approaches to address the need for explainability and confidence. These are key requirements when it comes to using AI to support human experts in high-stakes decision making. In this talk I will discuss how neuron activation analysis, knowledge graphs, and deductive reasoning can be used as key ingredients in the design of a neuro-symbolic cycle for human-centered explainability. I will discuss challenges in the design of such a cycle as well as opportunities for the adoption of Neuro-Symbolic AI in real-world scenarios, using the field of Radiology as a reference scenario.


Poster Session

11:45 – 13:00


Deep Parameterized Logics as a Foundation for Neurosymbolic AI

Vincent Derkinderen, Giuseppe Marra
9:00 – 10:15 | Part 1
10:45 – 12:00 | Part 2

Neurosymbolic AI aims to combine the strengths of learning and reasoning, but the field currently consists of many seemingly different approaches. This diversity makes it difficult to understand how existing methods relate to each other and how to design new neurosymbolic systems. In this tutorial, we present a unifying perspective showing that many neurosymbolic approaches can be understood through a shared formal framework based on deep parameterized logics. This perspective highlights common design dimensions underlying existing systems and clarifies how learning and logical reasoning interact. We connect these ideas to foundational concepts such as algebraic model counting and arithmetic circuits, and illustrate how they enable the principled construction of neurosymbolic systems in practice.
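Weighted model counting, one of the foundational concepts mentioned above, can be made concrete with a tiny example (illustrative only; the tutorial's framework is far more general). The weighted model count of a formula sums, over its satisfying assignments, the product of per-literal weights; arithmetic circuits compute the same quantity without enumerating models.

```python
from itertools import product

def wmc(formula, weights):
    """Weighted model count by explicit enumeration: sum over all
    satisfying assignments of the product of per-literal weights.
    Interpreting weights as probabilities of independent facts, this
    is the probability that the formula holds."""
    vars_ = sorted(weights)
    total = 0.0
    for values in product([False, True], repeat=len(vars_)):
        assignment = dict(zip(vars_, values))
        if formula(assignment):
            p = 1.0
            for v in vars_:
                p *= weights[v] if assignment[v] else 1.0 - weights[v]
            total += p
    return total

# Formula "A or B" with independent probabilistic-fact weights.
weights = {"A": 0.3, "B": 0.6}
p = wmc(lambda m: m["A"] or m["B"], weights)  # 1 - 0.7 * 0.4 = 0.72
```

Enumeration is exponential in the number of variables; compiling the formula into an arithmetic circuit (e.g., via knowledge compilation) makes the same computation tractable, and differentiable, for learning.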


Vincent Derkinderen is a postdoctoral researcher in the Declarative Languages and Artificial Intelligence (DTAI) research group at KU Leuven. His research focuses on the foundations of neurosymbolic AI and the integration of neural learning with logical and probabilistic reasoning. His work builds on expertise in knowledge compilation and (weighted) model counting, including their applications to probabilistic logic programming and efficient probabilistic inference. More recently, he studies how these foundations extend to neurosymbolic settings, contributing to unifying formalisms such as deep parameterized logics and the DeepLog framework.


Giuseppe Marra is an Assistant Professor in the Declarative Languages and Artificial Intelligence (DTAI) research group at KU Leuven, where he co-leads the Neurosymbolic AI Lab with Prof. Luc De Raedt. His research focuses on the integration of neural computation and symbolic reasoning, with an emphasis on logical and probabilistic methods for neurosymbolic AI. He has contributed to several influential neurosymbolic frameworks and works on foundations and applications of neurosymbolic learning in areas such as concept-based models and safe reinforcement learning.

Conjecturing in Mathematical Reasoning: From Symbolic to Neuro-Symbolic Methods

Moa Johansson
13:30 – 14:30

Learning to construct new, interesting, and useful lemmas is an important and long-standing challenge in AI for mathematical reasoning, albeit less explored than automating reasoning itself. Historically, various symbolic and heuristic methods have been proposed, and the recent developments in generative AI have opened new opportunities to use neuro-symbolic architectures.

In this lecture, I will give an overview of the field, and in particular talk about some recent research on neuro-symbolic lemma conjecturing for proof assistants and formalised mathematics. Mathematicians and computer scientists are increasingly using proof assistants to formalise and check the correctness of complex proofs. This is a non-trivial task in itself, however, with high demands on human expertise. Can we lower the bar by introducing automation for conjecturing helpful, interesting and novel lemmas? Automatically discovered lemmas can aid a human user working on a mathematical formalisation, strengthen automated theorem provers, and perhaps also become useful in agentic workflows.

Moa Johansson is an associate professor at Chalmers University of Technology in Gothenburg, Sweden, working on neuro-symbolic methods for mathematics, automated reasoning, and formal methods, in addition to application areas such as cognitive science and language. She did her PhD at the University of Edinburgh and has a longstanding interest in systems capable of assisting with creative steps of mathematical discovery, such as suggesting interesting, novel and useful lemmas in formalisations of maths.

Connecting the Dots

15:00 – 15:30

LLM Knowledge Materialization

Yujia Hu, Simon Razniewski
15:30 – 17:00

This hands-on session introduces structured knowledge elicitation from large language models (based on the GPTKB approach). After a brief conceptual overview, participants will experiment with a simple BFS-style prompting setup to extract knowledge from an LLM, exploring how prompts, seed entities, and depth affect the results.
Using this naive approach as a starting point, the session highlights key challenges such as entity recognition, duplication, and schema variability. Participants will then iteratively improve their extraction pipeline by adding fixes (e.g., NER, schema constraints) and examining real GPTKB data to identify typical error modes. The session provides a practical, experiment-driven understanding of both the potential and limitations of large-scale knowledge extraction from LLMs.
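The BFS-style elicitation loop described above can be sketched as follows. The `ask_llm` stub (with its canned triples) stands in for a real prompted LLM call, and the relation names are hypothetical; a real pipeline would add the NER and schema-constraint fixes the session discusses.

```python
from collections import deque

def ask_llm(entity):
    # Stub for an LLM call returning (relation, object) pairs for an
    # entity; a real setup would prompt a model and parse its output.
    canned = {
        "Douglas Adams": [("notable_work",
                           "The Hitchhiker's Guide to the Galaxy"),
                          ("born_in", "Cambridge")],
        "Cambridge": [("located_in", "England")],
    }
    return canned.get(entity, [])

def materialize(seed, max_depth):
    """BFS-style knowledge elicitation: query the model about a seed
    entity, then about every newly mentioned entity, up to max_depth
    hops from the seed."""
    triples, seen = [], {seed}
    queue = deque([(seed, 0)])
    while queue:
        entity, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for relation, obj in ask_llm(entity):
            triples.append((entity, relation, obj))
            if obj not in seen:
                seen.add(obj)
                queue.append((obj, depth + 1))
    return triples

kb = materialize("Douglas Adams", max_depth=2)  # 3 triples at depth <= 2
```

Changing `max_depth` or the seed entity directly exposes the trade-offs the session explores: deeper frontiers yield more triples but also more duplication and schema drift.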

Yujia Hu is currently a PhD student supervised by Prof. Simon Razniewski, focusing on knowledge graphs and large language models. She previously earned a Bachelor’s degree in Communication Engineering from Shandong University and a Master’s degree in Information Systems Engineering from TU Dresden.

 

Read more about Simon Razniewski here.

Navigating in Latent Space and Retrieval Augmented Generation

Robert Haase
15:30 – 17:00

Dr. Robert Haase is a computer scientist by training and has a track record in artificial intelligence, image analysis, data science, and data management. His current research focus is autonomous AI systems for data analysis.
After studying computer science at the University of Applied Sciences Dresden, he received a doctoral degree for his work on “Optimisation and Validation of a Swarm Intelligence based Segmentation Algorithm for low Contrast Positron Emission Tomography” from the Medical Faculty Carl Gustav Carus of the University of Technology Dresden, Germany. He served as Bio Image Analyst in the Scientific Computing Facility of the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, where he also did a PostDoc on GPU-accelerated image processing and smart microscopy in the Gene Myers lab. He also served as group leader for Bio Image Analysis Technology Development at the Cluster of Excellence “Physics of Life” at TU Dresden, before eventually joining ScaDS.AI Dresden/Leipzig at Leipzig University, where he works today as Group Leader and Training Coordinator.


Commonsense Reasoning Foundations and State of the Art

Filip Ilievski
9:00 – 10:15

Commonsense reasoning is a longstanding challenge, often called the “dark matter” of AI. This session will introduce relevant aspects of naive physics and folk psychology, as well as efforts to organize commonsense axioms, define dimensions of knowledge, and systematize reasoning. We will cover neural, symbolic, and hybrid (neuro-symbolic) techniques that promise robustness and explainability. We will discuss relevant tasks and benchmarks for commonsense reasoning. Finally, we will discuss the contextual defeasibility of commonsense reasoning and reflect on its universality from a moral and cultural perspective.

Abstraction in AI

Filip Ilievski
10:45 – 12:00

Abstraction is the act of distilling experiences into an abstract schema, which can be used for more efficient learning, generalization to new scenarios, or explanation to humans. Abstraction is an interdisciplinary challenge, with extensive work in cognitive psychology, linguistics, perception, and AI. This session will discuss different notions of abstraction from an AI perspective. We will cover typical abstraction mechanisms and representations in AI, and their suitability in various scenarios. We will review studies on the ability of foundation models to perform adequate abstraction and contrast it with that of humans. Then, we will dive deeper into analogical abstraction. Finally, we will discuss the relationship between abstraction, context, and ambiguity.

Filip Ilievski is an Assistant Professor at VU Amsterdam and a scientist at USC’s Information Sciences Institute. His research focuses on human-centric AI, specializing in commonsense reasoning, neurosymbolic methods, and analogy. Currently, he leads the NWO-funded “Human-Centric AI with Common Sense” project and a commonsense AI lab with 9 members. With over 100 publications and two books, Filip has developed foundational resources like the CommonSense Knowledge Graph (CSKG), methods for robust and explainable reasoning, and theory-aligned benchmarks for abstraction and reasoning. He holds leadership roles at the ELLIS Unit Amsterdam and the Digital Sustainability Center, and his work on visual abstraction was recently featured by the BBC.

Connecting the Dots

13:30 – 14:00

Towards Unlocking Industrial Potential of Neuro-Symbolic AI at Bosch: Opportunities and Challenges

Daria Stepanova
14:00 – 15:00

Neuro-symbolic AI combines the data-driven strengths of machine learning with the logical reasoning and transparency of symbolic systems, offering a transformative approach for industrial applications. While significant progress has been made in academic research, industrial adoption remains in its early stages. In this talk, I will present our journey at Bosch in trying to bridge the gap between research and real-world use cases, focusing on the combination of Answer Set Programming (ASP) and Knowledge Graphs with machine learning methods, e.g., large language models. I will present our attempts to apply these hybrid AI approaches in diverse domains such as conceptual system configuration, production optimization, and market analysis, while also highlighting key open research questions. Part of the work presented in this talk is the result of a collaboration between Bosch and the Vienna University of Technology.

Funded by:
Funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).
Funded by the Free State of Saxony (Freistaat Sachsen).