Interpretable AI for Classification Learning – Towards Model Trustworthiness and Plausibility

On 04.08.2022 at 11:00 a.m., the 13th lecture of the Living Lab lecture series took place. In this talk, Prof. Dr. Thomas Villmann spoke about Interpretable AI for Classification Learning – Towards Model Trustworthiness and Plausibility.

Interpretable and Robust Classification Learning by Learning Vector Quantizers

Despite the overwhelming success of deep networks, smart classification models are increasingly in demand as an alternative in settings with hardware constraints. Further, interpretability is frequently demanded and leads to better acceptance of machine learning tools. Additionally, robustness guarantees should provide classification certainty and model confidence.

In this talk, we reflected on current developments of the learning vector quantization model, which was originally introduced by T. Kohonen in the 1980s and has been mathematically justified and significantly extended in recent years. Perhaps surprisingly, these classifier models are highly flexible and adjustable for various classification tasks while providing interpretability and robustness, as well as being smart models with low computational requirements. Prof. Dr. Thomas Villmann presented the most important developments and theoretical results that ensure the required robustness, certainty, and flexibility while preserving interpretability. Selected application cases illustrated the abilities of the models.
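For readers unfamiliar with learning vector quantization, the following minimal sketch illustrates the core idea behind the classic LVQ1 variant (not the specific extended models discussed in the talk): class-labeled prototype vectors are attracted toward correctly classified training samples and repelled from misclassified ones, and a new sample simply inherits the label of its nearest prototype. All function names and parameters here are illustrative.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Basic LVQ1 training loop (Kohonen): attract the winning prototype
    on correct classification, repel it on misclassification.

    X            : (n_samples, n_features) training data
    y            : (n_samples,) class labels
    prototypes   : (n_protos, n_features) initial prototype vectors
    proto_labels : (n_protos,) class label assigned to each prototype
    """
    rng = np.random.default_rng(seed)
    W = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            # winner-takes-all: nearest prototype in Euclidean distance
            j = np.argmin(np.sum((W - x) ** 2, axis=1))
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            W[j] += sign * lr * (x - W[j])  # attract (+) or repel (-)
    return W

def lvq_predict(X, W, proto_labels):
    # each sample gets the label of its nearest prototype
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```

The interpretability stems from the prototypes themselves: each one lives in the original feature space and can be inspected as a representative example of its class. The mathematically justified extensions mentioned in the talk (e.g., cost-function-based generalizations of this heuristic update) retain this prototype structure while adding the theoretical guarantees.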

Missed this Living Lab lecture on Interpretable AI for Classification Learning?
You can rewatch this lecture on YouTube.


Find out more about our Living Lab Lecture Series.
