AI Courses provided in Summer Semester 2024

Courses provided by Prof. Dr.-Ing. Michael Färber

#  | Course Title                                                          | Modules                         | SWS
1. | Erweitertes Komplexpraktikum Vertrauenswürdige Künstliche Intelligenz | INF-B-510, INF-B-520, INF-MA-PR | 0/0/8
2. | Forschungsprojekt Vertrauenswürdige Künstliche Intelligenz            | INF-PM-FPA, INF-PM-FPG          | 0/0/8
3. | Forschungsprojekt Vertrauenswürdige Künstliche Intelligenz (CMS)      | CMS-PRO                         | 0/0/12
4. | Großer Beleg Skalierbare Software-Architekturen für Data Analytics    |                                 | 0/0/0
5. | Komplexpraktikum Vertrauenswürdige Künstliche Intelligenz             | INF-B-510, INF-B-520, INF-MA-PR | 0/0/4
6. | Teamprojekt Vertrauenswürdige Künstliche Intelligenz (CMS)            | CMS-LM-AI                       | 0/0/8

See also the TU Dresden course catalog. Potential topics are provided below.

How to join

On April 19, there will be a kick-off meeting in which all topics are presented in detail (see below). Afterwards, the students are assigned to the topics. Each topic is carried out in a team of around 3 students.

Interested in joining one of the courses? Then write an email to Prof. Färber by April 18, 8 pm, indicating your topic preference and your preferred course format (see courses 1-6 above).

Kick-off event:
When: April 19, 14:00h
Where: Hörsaalzentrum, 2. OG, Room 208
If you cannot participate but are still interested in joining a course, please write an email.

Opportunities for SHK positions related to these topics are also available.

Potential topics of the above-mentioned courses

Topic 1: Designing and Executing a Large-Scale User Study on Scientific Text Simplification

This topic is about making scientific texts more understandable. The goal is to automatically rewrite or translate academic articles so they become clear not only to field experts but also to researchers from other disciplines and interested laypeople. This role involves planning and conducting a user study. It’s a unique chance to actively engage in a project that could transform how we interact with scientific knowledge. You’ll gain experience in research methodology and user study design, directly contributing to making science more accessible to a broader audience.

What are the tasks?
• Designing a comprehensive user study (e.g., select and preprocess the scientific texts for the participants).
• Collecting and analyzing the responses from the participants (e.g., participants who summarized or simplified scientific texts in their own words) so that the responses can serve as a ground truth (“perfect texts”) for AI models.
• If you have experience with programming: Implement generative AI models (e.g., GPT-based) which can summarize or simplify scientific texts automatically.
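As a small illustration of the analysis step, an automatic readability metric such as the Flesch reading ease score can complement the human judgments collected in the study. The sketch below uses a crude vowel-group syllable heuristic and made-up example sentences; it is only meant to show the idea, not a production metric.

```python
# Sketch: Flesch reading ease as a rough automatic readability check.
# The syllable counter is a heuristic, not a linguistic gold standard.
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Higher scores indicate easier text (roughly 0-100 for English prose)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical example pair: the simplified sentence should score higher.
complex_text = ("Photosynthesis facilitates the biochemical conversion "
                "of electromagnetic radiation.")
simple_text = "Plants use light to make food."
```

Such a score could be computed for both the original articles and the participants' simplifications to sanity-check the collected ground truth.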

What prerequisites do you need?
• A passion for making scientific knowledge accessible to a broader audience.
• Strong interest in research, user study design, and data analysis.
• Good organizational skills to effectively design and manage a large user study (funded by the chair).
• Basic programming skills.

Topic 2: Stock Market Predictions through Deep Learning

This topic focuses on a collaboration with Orca Capital, a Munich-based company specializing in financial markets. Together with Orca Capital, a running system has been developed that predicts stock movements of certain companies, such as rising/falling prices and volatility, based on a continuous stream of news. This system utilizes deep learning and natural language processing methods, including pretrained language models. The students will work on further developing and enhancing the system, using real-world financial data and industry contacts. Possible enhancements include applying the latest large language models (LLMs) and techniques from explainable AI to make the predictions more interpretable.

What are the tasks?

  • Developing extensions and improvements of the system, using the latest findings in deep learning and natural language processing.
  • Evaluating the system’s performance and making its predictions more interpretable, integrating methods from the field of explainable AI.
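To give a feel for the task, the sketch below shows a trivial keyword-based baseline that labels news headlines as bullish, bearish, or neutral. The actual system uses pretrained language models, and the headlines and keyword lists here are hypothetical; this only illustrates the input/output shape such a component has.

```python
# Sketch: keyword-based sentiment baseline for financial news headlines.
# A real system would replace this with a pretrained language model.
POSITIVE = {"beats", "soars", "record", "upgrade", "profit", "growth"}
NEGATIVE = {"misses", "plunges", "lawsuit", "downgrade", "loss", "recall"}

def news_signal(headline: str) -> int:
    """Return +1 (bullish), -1 (bearish), or 0 (neutral) for a headline."""
    tokens = {t.strip(".,!?").lower() for t in headline.split()}
    score = len(tokens & POSITIVE) - len(tokens & NEGATIVE)
    return (score > 0) - (score < 0)

headlines = [  # hypothetical examples
    "ACME beats earnings expectations, shares rise",
    "ACME faces lawsuit over product recall",
    "ACME announces new headquarters",
]
signals = [news_signal(h) for h in headlines]
```

A baseline like this is also useful for evaluation: interpretable rules provide a reference point when assessing what the deep learning model adds.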

What prerequisites do you need?

  • Good programming skills in Python.

Topic 3: Large Language Model-enhanced Graph Message Passing Network for Link Prediction

This topic is about advancing AI-based recommendation methods through the integration of large language models and graph message passing networks. The project aims to revolutionize how we predict and understand linkages within academic citation networks.

What are the tasks?
• Implementing and testing algorithms for link prediction, community detection, node classification, and potentially other graph-supervised learning tasks.
• Exploring the trade-offs between the utilization of textual and structural features in link prediction algorithms, and devising methods to efficiently combine these features.
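The structural side of link prediction can be illustrated without any neural network: classical scores such as common neighbors and Adamic-Adar already capture topology. The sketch below runs them on a toy citation graph with hypothetical paper IDs; the project would combine such structural signals with LLM-derived textual features.

```python
# Sketch: structural link-prediction scores on a toy citation graph.
import math
from collections import defaultdict

edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p2", "p4"), ("p3", "p4")]

neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def common_neighbors(u: str, v: str) -> int:
    """Number of nodes adjacent to both u and v."""
    return len(neighbors[u] & neighbors[v])

def adamic_adar(u: str, v: str) -> float:
    """Shared neighbors weighted inversely by their log-degree."""
    return sum(1 / math.log(len(neighbors[w]))
               for w in neighbors[u] & neighbors[v]
               if len(neighbors[w]) > 1)

# p1 and p4 share neighbors p2 and p3, so a p1-p4 link is plausible.
```

A message passing network generalizes this idea: instead of fixed formulas, it learns how to aggregate neighborhood (and, here, text-based) features.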

What prerequisites do you need?
• A strong interest in machine learning, natural language processing, or graph theory.
• Proficiency in programming, preferably in Python, with experience in PyTorch or TensorFlow.
• Eagerness to engage with state-of-the-art research in link prediction and text mining.

Topic 4: Extending the SemOpenAlex RDF Knowledge Graph

This topic is about working on SemOpenAlex, a comprehensive RDF knowledge graph that includes over 26 billion triples related to scientific publications, authors, institutions, journals, and more. This open-access initiative offers data through RDF dump files, a SPARQL endpoint, and the Linked Open Data cloud, enhancing the visibility and accessibility of scientific research.

What are the tasks?
• Keeping SemOpenAlex up-to-date by updating its schema according to changes in the OpenAlex database and performing periodic updates to the RDF database.
• Expanding SemOpenAlex, e.g., by introducing author name disambiguation, integrating representations of code repositories like GitHub, and linking to other databases and knowledge graphs such as Wikidata and DBLP.
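As a taste of working with the SPARQL endpoint, the sketch below builds a query string listing highly cited works. The prefixes and predicate names follow common SemOpenAlex/OpenAlex conventions but are assumptions here and should be checked against the current schema before use.

```python
# Sketch: constructing a SPARQL query for a SemOpenAlex-style endpoint.
# Predicate names (soa:citedByCount, dcterms:title) are assumptions; verify
# them against the actual SemOpenAlex ontology before querying.
def citation_count_query(limit: int = 5) -> str:
    """Return a SPARQL query listing works ordered by cited-by count."""
    return f"""
    PREFIX soa: <https://semopenalex.org/ontology/>
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?work ?title ?cited WHERE {{
        ?work dcterms:title ?title ;
              soa:citedByCount ?cited .
    }}
    ORDER BY DESC(?cited)
    LIMIT {limit}
    """

query = citation_count_query(10)
# The resulting string could then be POSTed to the public SPARQL endpoint.
```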

What prerequisites do you need?
• Basic understanding of RDF and enthusiasm for semantic web and open data.
• Programming skills in Python, which are critical for various tasks including database maintenance and development.

Topic 5: Fusing RDF Knowledge Graphs with Deep Learning for Advanced Recommender Systems

This project seeks to expand AutoRDF2GML, an open-source framework acclaimed for converting RDF data into specialized representations ideal for cutting-edge graph machine learning (GML) tasks, including graph neural networks (GNNs). With its automatic extraction of both content-based and topology-based features from RDF knowledge graphs, AutoRDF2GML simplifies the process for those new to RDF and SPARQL, making semantic web data more accessible and usable in real-world applications.

What are the tasks?
• Adapt AutoRDF2GML to process a broader range of RDF knowledge graphs, allowing flexible integration of data sources from the Linked Open Data cloud.
• Redesign the AutoRDF2GML interface to be more intuitive and user-friendly, enabling a seamless experience for both new and experienced users.
• Boost the framework’s automation capabilities to simplify the setup and execution processes, making it easier to generate and use graph machine learning datasets efficiently.
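The core idea behind the conversion can be sketched in a few lines: literal-valued RDF properties become content-based node features, while resource-to-resource statements become topology-based edges. The toy triples and predicate names below are hypothetical; the real framework operates on actual RDF graphs via configuration.

```python
# Sketch: splitting RDF-style triples into content features and edges,
# mimicking the content-based vs. topology-based split in AutoRDF2GML.
# Toy data with hypothetical predicate names.
triples = [
    ("paper1", "hasTitle", '"Graph Learning"'),  # literal -> content feature
    ("paper1", "cites", "paper2"),               # resource -> edge
    ("paper2", "hasTitle", '"RDF Basics"'),
    ("author1", "wrote", "paper1"),
]

def split_features_and_edges(triples):
    """Separate literal-valued properties (features) from resource links."""
    features, edges = {}, []
    for s, p, o in triples:
        if o.startswith('"'):                    # crude literal check
            features.setdefault(s, {})[p] = o.strip('"')
        else:
            edges.append((s, p, o))
    return features, edges

features, edges = split_features_and_edges(triples)
```

The resulting feature dictionaries and edge list map directly onto the node-feature matrices and edge indices expected by GNN libraries.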

What prerequisites do you need?
• Proficiency in Python, with a foundational understanding of RDF, SPARQL, and graph machine learning concepts.
• An enthusiastic interest in the intersection of semantic web technologies and deep learning.