Software engineering will play an increasing role in the development of AI-based systems. Vice versa, AI will be a driving factor for innovation, automation, and productivity in the software development process. The goal of the group is to address both research directions simultaneously: improving software engineering methods with AI while building intelligent systems.
We concentrate our work on three research areas: MLOps and experimentation, support for FPGA-based ML accelerators, and ML-enabled compiler optimizations.
To support agile experimentation, reproducibility, and automation, we will develop an extensible experimentation platform on top of industry-standard experiment-tracking frameworks. Specifically, we (1) enable reproducibility and a benchmark environment, (2) automate feedback loops from production to development, and (3) investigate social factors, such as team structure and composition in AI-software teams.
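The reproducibility goal above can be illustrated with a minimal sketch: hashing a canonical form of the experiment configuration yields a stable run ID, and seeding all randomness from the configuration makes recorded metrics repeatable. The function names and the toy "training run" are hypothetical, not part of the platform described here.

```python
import hashlib
import json
import random

def config_fingerprint(config: dict) -> str:
    """Deterministic hash of an experiment configuration: identical
    configs map to the same run ID, which supports reproducible
    benchmarking and deduplication of runs."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

def run_experiment(config: dict) -> dict:
    """Toy stand-in for a training run: seeding the RNG from the
    config makes the recorded metric bit-for-bit reproducible."""
    random.seed(config["seed"])
    metric = random.random()  # placeholder for a real validation score
    return {"run_id": config_fingerprint(config), "metric": metric}

config = {"model": "cnn", "lr": 0.01, "seed": 42}
first = run_experiment(config)
second = run_experiment(config)
assert first == second  # same config -> same run ID and same metric
```

A real platform would log these records to a tracking backend; the point of the sketch is only that run identity and metrics are pure functions of the configuration.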
For FPGA support, we strive for a high-level synthesis library for realizing ML accelerators on FPGAs, including multi-vendor support for platforms such as Xilinx and Intel. By following a modular, scalable, and extensible approach, we enable the fast exploration of designs as well as of on- and off-chip memories and interfaces.
Finally, we investigate machine learning methods to improve optimizing compilers and software mapping methodologies. To this end, we leverage graph networks to model code and iterative methods such as reinforcement learning to explore the design space of valid transformations. This is enabled by a Python-based framework, called ComPy-Learn, that conveniently interfaces ML tooling with state-of-the-art compiler frameworks. A particular focus is set on optimizing for reconfigurable and emerging domain-specific accelerators.
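The iterative exploration of valid transformations can be sketched as a search over compiler pass orderings. The pass names and cost model below are entirely hypothetical (a real setup would invoke a compiler and measure runtime or code size, e.g. via ComPy-Learn); the sketch only shows the shape of randomized episode-based search, a simple stand-in for reinforcement learning.

```python
import itertools
import random

# Hypothetical pass set; real pass pipelines are much larger.
PASSES = ["inline", "unroll", "vectorize", "dce"]

def cost(sequence) -> float:
    """Toy cost model: rewards unrolling before vectorization and
    running dead-code elimination last. A real cost would come from
    compiling and measuring the transformed program."""
    seq = list(sequence)
    c = 10.0
    if seq and seq[-1] == "dce":
        c -= 3.0
    if seq.index("unroll") < seq.index("vectorize"):
        c -= 4.0
    return c

def exhaustive_best():
    """Ground truth by brute force, feasible only for tiny pass sets."""
    return min(itertools.permutations(PASSES), key=cost)

def random_search(episodes: int = 200, seed: int = 0):
    """Episode-based random exploration of the ordering space,
    keeping the best sequence found so far."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(episodes):
        candidate = rng.sample(PASSES, len(PASSES))
        if cost(candidate) < best_cost:
            best, best_cost = tuple(candidate), cost(candidate)
    return best, best_cost

found, found_cost = random_search()
assert found_cost == cost(exhaustive_best())
```

A learned policy (e.g. a graph network over the code, trained with reinforcement learning) would replace the uniform sampling with informed action choices; the episode loop and cost feedback stay the same.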