Status: open / Type of Thesis: Master's thesis / Location: Dresden
Recent advances in language modeling have demonstrated the power of large pre-trained models to generalize across domains. In contrast, models for graph representation learning, particularly those applied to Knowledge Graphs (KGs), often struggle to generalize because of structural and relational heterogeneity between graphs. Unlike text, graphs lack a canonical linear form, and individual KGs can differ in topology, vocabulary, and domain-specific patterns. This makes it challenging to design models that generalize across KGs, whether within the same domain or across different ones.
To address this, Graph Foundation Models (GFMs) have emerged as a promising direction. Similar to large language models (LLMs), GFMs are trained on diverse graph datasets to learn general-purpose representations. Models such as ULTRA [1] and MOTIF [2] have shown that it is possible to learn subgraph-level representations that transfer across different KGs, enabling reasoning and inference over unseen entities and relations.
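As an illustration of this vocabulary independence, the following minimal Python sketch (with a toy KG and a hypothetical helper name, build_relation_graph; not code from the cited papers) builds the kind of "graph of relations" used by ULTRA [1]: relations become nodes, and an edge records one of four structural interactions (head-to-head, head-to-tail, tail-to-head, tail-to-tail) between two relations that share an entity. Because the construction only inspects entity overlap between relations, a GNN run over this graph can, in principle, produce relation representations that transfer to unseen vocabularies.

```python
from collections import defaultdict

def build_relation_graph(triples):
    """Illustrative sketch: build a graph of relations from (head, relation, tail) triples.

    Nodes are relations; an edge (r1, interaction, r2) marks one of four
    structural interactions between two relations that share an entity:
    h2h (shared head), h2t (head of r1 is tail of r2), t2h, and t2t.
    The construction never uses entity or relation identities beyond set
    overlap, so it applies unchanged to any KG vocabulary.
    """
    heads = defaultdict(set)  # relation -> entities appearing as its head
    tails = defaultdict(set)  # relation -> entities appearing as its tail
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)

    relations = set(heads) | set(tails)
    edges = set()
    for r1 in relations:
        for r2 in relations:
            if r1 == r2:
                continue
            if heads[r1] & heads[r2]:
                edges.add((r1, "h2h", r2))
            if heads[r1] & tails[r2]:
                edges.add((r1, "h2t", r2))
            if tails[r1] & heads[r2]:
                edges.add((r1, "t2h", r2))
            if tails[r1] & tails[r2]:
                edges.add((r1, "t2t", r2))
    return edges

# Toy KG: "acme" links the two relations, yielding t2h/h2t edges between them.
triples = [
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "dresden"),
    ("bob", "works_at", "acme"),
]
for edge in sorted(build_relation_graph(triples)):
    print(edge)
```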
This master’s thesis focuses on the development and advancement of Graph Foundation Models for KG reasoning. The core goals include:
This thesis offers the opportunity to contribute to a cutting-edge and growing area of research at the intersection of graphs, AI, and foundation models.
[1] Galkin, Mikhail, Xinyu Yuan, Hesham Mostafa, Jian Tang, and Zhaocheng Zhu. “Towards foundation models for knowledge graph reasoning.” arXiv preprint arXiv:2310.04562 (2023).
[2] Huang, Xingyue, Pablo Barceló, Michael M. Bronstein, Ismail Ilkan Ceylan, Mikhail Galkin, Juan L. Reutter, and Miguel Romero Orth. “How Expressive are Knowledge Graph Foundation Models?” arXiv preprint arXiv:2502.13339 (2025).
[3] Galkin, Mikhail, Jincheng Zhou, Bruno F. Ribeiro, Jian Tang, and Zhaocheng Zhu. “Zero-shot logical query reasoning on any knowledge graph.” CoRR (2024).
[4] Xia, Lianghao, and Chao Huang. “AnyGraph: Graph foundation model in the wild.” arXiv preprint arXiv:2408.10700 (2024).