Status: finished / Type of Thesis: Seminar Thesis / Location: Dresden
In recent years, machine learning (ML) has enabled significant advances in big data-driven decision-making across various industries by providing accurate predictions. However, the lack of transparency and interpretability in many ML models has raised concerns, especially in settings where understanding the reasoning behind a model's decisions is crucial.
Citation counts for research articles on 'Interpretable ML' and 'Explainable AI' in the Web of Science show a marked, accelerating upward trend, reflecting the growing importance of and interest in these topics. This research project aims to address the 'black box' nature of ML models by exploring techniques and methods for better understanding their decisions and predictions.
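To illustrate the kind of technique such a project might examine, the following is a minimal pure-Python sketch of permutation importance, one common model-agnostic interpretability method: the importance of a feature is measured as the drop in accuracy when that feature's values are randomly shuffled. The dataset and the stand-in "black box" model below are hypothetical, chosen only so the example is self-contained.

```python
import random

# Hypothetical toy data: two features, of which only feature 0 is predictive.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(rows):
    """Stand-in 'black box': in practice this would be any fitted ML model."""
    return [1 if r[0] > 0.5 else 0 for r in rows]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature column is shuffled across rows."""
    base = accuracy(model(X), y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return base - accuracy(model(X_perm), y)

for f in range(2):
    print(f"feature {f}: importance = {permutation_importance(X, y, f):.3f}")
```

On this toy setup, shuffling feature 0 sharply reduces accuracy while shuffling feature 1 changes nothing, so the method correctly attributes the model's behaviour to feature 0 without inspecting the model's internals.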