Recent advances in language modeling have demonstrated the power of large pre-trained
models to generalize across domains. In contrast, models for graph representation learning,
particularly those applied to Knowledge Graphs (KGs), often struggle to generalize due to
structural and relational heterogeneity between graphs. Unlike text, graph structures lack a
canonical linear form, and different KGs may follow different topologies, vocabularies, or
domain-specific patterns. This poses a significant challenge for designing models that can
generalize across KGs in the same or different domains.
To address this, Graph Foundation Models (GFMs) have emerged as a promising direction.
Similar to LLMs, GFMs are trained on diverse graph datasets to learn general-purpose
representations. Models such as Ultra [1] and Motif [2] have shown that it is possible to learn
subgraph-level representations that transfer across different KGs, enabling reasoning and
inference over unseen entities and relations.
Thesis Objectives
This master’s thesis targets the development and advancement of Graph Foundation Models
for KG reasoning. The core goals include:
- (Preparation phase) Surveying existing GFMs and analyzing their architectures,
capabilities, and limitations: Ultra [1], Motif [2], UltraQuery [3], AnyGraph [4].
- Collecting existing relational KG benchmarks for inductive and transductive KG
completion.
- (Main task and thesis objective) Designing and implementing a GFM for relational
KGs that enhances generalization and reasoning ability beyond existing relational
KG embedding models.
- (Model evaluation) Designing an evaluation framework for model performance on
benchmark datasets with a focus on subgraph invariance, expressiveness,
generalization to unseen entities/relations, and KG transfer learning.
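Evaluation of KG completion on benchmark datasets typically reports rank-based link-prediction metrics such as mean reciprocal rank (MRR) and Hits@k. As an illustrative sketch only (the function name and plain-Python representation are my own, not part of the thesis description), these metrics can be computed from per-query candidate scores like this:

```python
def ranking_metrics(scores, true_idx):
    """MRR and Hits@10 for a batch of link-prediction queries.

    scores:   list of per-query candidate score lists (one score per entity)
    true_idx: list giving the index of the correct entity for each query
    """
    ranks = []
    for query_scores, t in zip(scores, true_idx):
        true_score = query_scores[t]
        # rank = 1 + number of candidates scored strictly above the true entity
        rank = 1 + sum(s > true_score for s in query_scores)
        ranks.append(rank)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits10 = sum(r <= 10 for r in ranks) / len(ranks)
    return mrr, hits10
```

In practice, the scores would come from the trained GFM, and the "filtered" protocol common in KG-completion benchmarks would additionally mask other known true triples before ranking.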
Prerequisites
- Enrollment in a CS or related master’s program at TU Dresden.
- Familiarity with graph neural networks or knowledge graph embedding methods.
- Programming experience in Python and PyTorch or TensorFlow.
- Interest in machine learning, representation learning, and graph-based reasoning.
This thesis offers the opportunity to contribute to a cutting-edge and growing area of research
at the intersection of graphs, AI, and foundation models.
References
[1] Galkin, Mikhail, Xinyu Yuan, Hesham Mostafa, Jian Tang, and Zhaocheng Zhu. “Towards
foundation models for knowledge graph reasoning.” arXiv preprint arXiv:2310.04562 (2023).
[2] Huang, Xingyue, Pablo Barceló, Michael M. Bronstein, Ismail Ilkan Ceylan, Mikhail Galkin,
Juan L. Reutter, and Miguel Romero Orth. “How Expressive are Knowledge Graph Foundation
Models?.” arXiv preprint arXiv:2502.13339 (2025).
[3] Galkin, Mikhail, Jincheng Zhou, Bruno F. Ribeiro, Jian Tang, and Zhaocheng Zhu. “Zero-
shot logical query reasoning on any knowledge graph.” CoRR (2024).
[4] Xia, Lianghao, and Chao Huang. “AnyGraph: Graph foundation model in the wild.” arXiv
preprint arXiv:2408.10700 (2024).