Deep Learning for small devices and FPGAs
With the rise of Big Data and cheaply available computation power, Neural Networks have achieved state-of-the-art results in many practical areas such as speech recognition and image classification. Even though Neural Networks have been around for over 70 years, this success is quite recent and somewhat sudden. It stems not only from the greater availability of data and computation power, but also from a more engineering-style approach to Neural Networks in general. This has led to deep models with up to 128 layers and billions of parameters. Today's approaches for training and deploying these large models mostly rely on energy-hungry GPUs and are thus unsuitable for embedded devices found in, e.g., self-driving cars.
In this talk, we discuss this recent rise of Neural Networks as part of the Deep Learning approach. The first half of the talk covers the basics of (Convolutional) Neural Networks. The second half explores the possibilities of training and deploying (large) Neural Networks on small, embedded devices. A particular focus is placed on Field-Programmable Gate Arrays (FPGAs) as a fast yet less energy-demanding computation architecture for Deep Learning.
Sebastian Buschjäger is a PhD candidate in the Artificial Intelligence Group at TU Dortmund University, Germany. His main research concerns resource-efficient machine learning algorithms and specialized hardware for machine learning. He currently focuses on applying ensemble methods to small devices and/or specialized hardware such as FPGAs. Sebastian studied Computer Science and Electrical Engineering at TU Dortmund University, starting in the pupils' program, and received his B.Sc. in 2014 and his M.Sc. in 2016.