Status: open / Type of Thesis: Master thesis / Location: Leipzig
The concept of Machine Unlearning (MU) has gained popularity across various domains because it addresses several issues in Machine Learning (ML) models, particularly those related to privacy, security, bias mitigation, and adaptability. MU enables AI systems to selectively remove and forget data from trained models without retraining them from scratch. This may involve removing sensitive, undesirable, harmful, unethical, and/or outdated content from a model’s memory. With this capability, MU not only has the potential to support privacy regulations such as the “Right to Be Forgotten” but is also evolving into a promising technology for upholding Responsible AI principles and fostering Adaptive AI. Despite this potential, however, the MU concept has received little attention in the education sector, even though educational settings are often dynamic and rely on sensitive data. Investigating MU’s application may open new pathways toward more responsible, trustworthy, and adaptive AI-driven educational systems.
Building on the MU4RAAI (Machine Unlearning for Responsible and Adaptive AI) framework, this work explores the dual capacity of MU to serve as (a) a practical mechanism for operationalizing Responsible AI principles and (b) an essential tool for Adaptive AI within the educational application domain.
For a master’s thesis, the student can opt for one of the following focus areas: