
MU4RAAI: Machine Unlearning for Responsible and Adaptive AI in Education Context

Status: open / Type of Thesis: Master's thesis / Location: Leipzig

Description

The concept of Machine Unlearning (MU) has gained popularity across various domains
because it addresses several issues in Machine Learning (ML) models, particularly those
related to privacy, security, bias mitigation, and adaptability. MU enables AI systems to
selectively remove and forget data from trained models without retraining them from
scratch. This may involve removing sensitive, undesirable, harmful, unethical, and/or
outdated content from a model's memory. With this capability, MU not only has the
potential to support privacy regulations such as the “Right to Be Forgotten” but is also
evolving into a promising technology for upholding Responsible AI principles and fostering
Adaptive AI. However, despite this potential, particularly in educational settings, which
are often dynamic and rely on sensitive data, the MU concept has received little
attention in the sector. Investigating MU's application may open new pathways toward
more responsible, trustworthy, and adaptive AI-driven educational systems.
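To make "forgetting without full retraining" concrete, one well-known exact-unlearning strategy (the sharded-ensemble idea behind SISA training) can be sketched in a few lines: the training set is split into shards, each shard trains its own independent model, and removing a data point only requires retraining the single shard that contained it. The toy per-class centroid classifier below is purely illustrative, with made-up data and function names, and is not part of the MU4RAAI framework itself.

```python
# Sharded-ensemble unlearning sketch (hypothetical toy example).
# Each shard trains an independent model (here: per-class mean
# "centroid" classifier); predictions are a majority vote. Unlearning
# a point retrains only the shard that held it, not the whole ensemble.

def train_shard(shard):
    """shard: list of (features, label) -> dict of per-class centroids."""
    sums, counts = {}, {}
    for x, y in shard:
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(models, x):
    """Majority vote; each shard model votes for its nearest centroid."""
    votes = {}
    for m in models:
        best = min(m, key=lambda y: sum((xi - ci) ** 2
                                        for xi, ci in zip(x, m[y])))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)

def unlearn(shards, models, point):
    """Remove `point` from its shard and retrain only that shard."""
    for i, shard in enumerate(shards):
        if point in shard:
            shard.remove(point)
            models[i] = train_shard(shard)
            break
    return models

# Toy data: two shards, two classes "a" and "b".
shards = [
    [([0.0, 0.0], "a"), ([0.2, 0.1], "a"), ([1.0, 1.0], "b")],
    [([0.9, 1.1], "b"), ([0.1, 0.0], "a"), ([1.2, 0.9], "b")],
]
models = [train_shard(s) for s in shards]
models = unlearn(shards, models, ([1.0, 1.0], "b"))  # forget one sample
print(predict(models, [0.1, 0.1]))  # prints "a"
```

The design trade-off this illustrates is central to MU: isolating training across shards bounds the cost of forgetting (one shard's retraining time) at the price of some ensemble accuracy.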

Main Objective

Based on the MU4RAAI (Machine Unlearning for Responsible and Adaptive AI)
framework, this work explores the potential of MU in its dual capacity to serve as
(a) a practical mechanism for operationalizing Responsible AI principles and (b) an
essential tool for Adaptive AI within the educational application domain.

Focus areas:

For a master’s thesis, the student can opt for one of the following focus areas:

  1. Machine Unlearning for translating Responsible AI principles into practice (with a
    focus on bias mitigation, though other application areas such as privacy preservation
    and security can also be explored). For example:
    • Bias – unlearning biased data or correcting errors introduced during the learning
      process.
    • Privacy – selectively removing/forgetting sensitive data from trained models.
    • Security – unlearning polluted or corrupted training data to mitigate the influence
      of potentially harmful data.
  2. Machine Unlearning for enhancing Adaptive AI for performance optimisation (e.g. MU
    facilitating adaptability to changing data distributions).
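A common approximate-unlearning baseline relevant to focus area 1 is gradient ascent on the forget set: the model takes gradient steps that *increase* its loss on the samples to be removed (e.g. biased or poisoned data), the mirror image of ordinary training. The minimal logistic-regression sketch below is a hypothetical illustration with made-up numbers, not a method prescribed by the thesis topic.

```python
# Approximate unlearning via gradient ascent (hypothetical sketch).
# A one-sample SGD step with forget=False learns the sample (descent);
# with forget=True it "forgets" it (ascent on the same loss).

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Logistic (cross-entropy) loss for one sample, y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -math.log(p if y == 1 else 1.0 - p)

def grad(w, x, y):
    """Gradient of the logistic loss w.r.t. the weights."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * xi for xi in x]

def step(w, x, y, lr=0.1, forget=False):
    """Descent learns the sample; ascent (forget=True) unlearns it."""
    s = 1.0 if forget else -1.0
    g = grad(w, x, y)
    return [wi + s * lr * gi for wi, gi in zip(w, g)]

w = [0.5, -0.3]
sample = ([1.0, 2.0], 1)           # e.g. a biased sample to remove
before = loss(w, *sample)
w = step(w, *sample, forget=True)  # one unlearning step
after = loss(w, *sample)
print(after > before)              # prints True: model fits the sample worse
```

In practice such ascent-based methods must be paired with safeguards (bounded step counts, regularization toward the original weights) so that forgetting a few samples does not degrade the model elsewhere, which is exactly the kind of trade-off a thesis in this area could study.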

Ideal Candidate

  • Background in Machine Learning
  • Experience or interest in Machine Unlearning
  • Enthusiastic about exploring new ideas, especially in the emerging field of Machine
    Unlearning

 

Funded by:
Funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).
Funded by the Free State of Saxony (Freistaat Sachsen).