Parallel Machine Learning and Deep Learning driven by HPC
Fast training of machine learning models and innovative deep learning networks on rapidly growing volumes of scientific and engineering data requires high performance computing (HPC). Modern supercomputing technologies, such as those developed within the European DEEP-EST project, provide innovative approaches to processing, memory, and modular supercomputing usage during training, testing, and validation. This lecture illustrates why and how parallel processing is a key enabler for a wide variety of machine and deep learning algorithms today. Examples include scientific and engineering applications that leverage scalable feature engineering, density-based spatial clustering of applications with noise (DBSCAN), support vector machines (SVMs) and kernel methods, convolutional neural networks (CNNs), as well as long short-term memory (LSTM) networks.
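To make the parallel-training idea concrete, the following is a minimal, illustrative sketch (not code from the lecture) of the data-parallel pattern commonly used on HPC systems: each worker computes a gradient on its own data shard, and the gradients are averaged, mimicking an MPI allreduce step. All names and the toy dataset here are hypothetical.

```python
# Data-parallel gradient descent sketch: fit y = w*x by least squares.
# Each "worker" holds one data shard, as if on a separate HPC node.

def local_gradient(shard, w):
    """Gradient of the mean squared error (w*x - y)^2 on one shard."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def allreduce_mean(grads):
    """Stand-in for MPI_Allreduce: average gradients across workers."""
    return sum(grads) / len(grads)

# Toy data generated from y = 3x, split into two shards.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]

w = 0.0
for step in range(200):
    # On a real system these run concurrently, one per node.
    grads = [local_gradient(s, w) for s in shards]
    w -= 0.01 * allreduce_mean(grads)

print(round(w, 2))  # converges toward 3.0
```

In practice the per-shard computation runs concurrently across nodes (e.g. via MPI or a framework such as Horovod), and only the small gradient vectors are exchanged, which is what makes the approach scale.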
Prof. Dr.-Ing. Morris Riedel is an Adjunct Associate Professor at the School of Engineering and Natural Sciences of the University of Iceland. He received his PhD from the Karlsruhe Institute of Technology (KIT) and has worked in parallel and distributed systems for 15 years. He has held various positions at the Juelich Supercomputing Centre of Forschungszentrum Juelich in Germany, where he currently heads a research group on ‘High Productivity Data Processing’ and a cross-sectional team on ‘Deep Learning’. His research focuses on parallel and scalable machine learning algorithms and deep learning networks that leverage cutting-edge High Performance Computing (HPC) technologies. Beyond university lectures such as Statistical Data Mining, High Performance Computing (HPC), and Cloud Computing and Big Data, he has given many tutorials on machine learning and deep learning, including invited lectures at Ghent University that are available on YouTube.