The optimization of hyperparameters is an important task when applying neural networks as well as other machine-learning methods and classical simulations. Examples of hyperparameters of neural networks are the type of network to be used, the number of its layers, the number of neurons in a layer, the number and size of filters, the learning rate, the batch size, the type of activation functions and many more. Finding an appropriate set of hyperparameters is crucial for the accuracy and performance of an application and is usually a very time-consuming and tedious task if done manually.
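To illustrate the idea, the following minimal sketch evaluates every combination in a small search space and keeps the best one. The search space, the objective function and the parameter names are purely hypothetical examples; OmniOpt uses its own configuration format and more sophisticated search strategies.

```python
from itertools import product

# Hypothetical search space for illustration only; real hyperparameter
# searches are usually far larger and evaluated in parallel.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

def objective(learning_rate, batch_size):
    # Stand-in for a real training run that returns a validation loss.
    # Here the loss is simply minimal at learning_rate=0.01, batch_size=32.
    return (learning_rate - 0.01) ** 2 + (batch_size - 32) ** 2 / 1e4

def grid_search(space, objective):
    """Exhaustively evaluate every parameter combination and return the best."""
    best_params, best_loss = None, float("inf")
    for values in product(*space.values()):
        params = dict(zip(space.keys(), values))
        loss = objective(**params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best_params, best_loss = grid_search(search_space, objective)
```

Even this exhaustive approach already shows why manual tuning does not scale: the number of combinations grows multiplicatively with every additional hyperparameter, which is where parallel evaluation on an HPC system pays off.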
In this training, the hyperparameter optimization tool OmniOpt is introduced. OmniOpt is a tailor-made solution for the high performance computing (HPC) cluster Taurus of TU Dresden which allows users to optimize the hyperparameters of a wide range of problems. The use of the HPC system Taurus with its vast resources of GPUs, CPUs and storage ensures that even large problems can be handled within a moderate computation time. Moreover, there is a variety of tools for the automatic analysis and graphical representation of the optimization results.
The aim of this training is to enable the participants to use OmniOpt on their own. It will contain (1) a short general introduction to hyperparameter optimization, (2) a brief introduction to the use of the HPC system Taurus, (3) an extended introduction to OmniOpt using a hands-on example and (4) an evaluation of the optimization results with the OmniOpt toolkit. The training is suitable for researchers as well as for students with basic knowledge of Linux and a command-line based programming language, e.g. Python or C. Researchers who bring their own code to be optimized are highly welcome. This is, however, not a prerequisite.
Title: Efficient Parallel Hyperparameter Optimization with OmniOpt on HPC
Speakers: Dr. Peter Winkler, Norman Koch
Next Session: 09.05.2023, 10 a.m. – 3:30 p.m.
Target Group: Researchers and Students
Format: On-site tutorial, Andreas-Pfitzmann-Bau (APB-1020)
Participation is free of charge.
After a general introduction into HPC and OmniOpt, the main part of the training will be a hands-on session, where the participants can try out the tool in practice.
Do you have any questions about the tutorial Efficient Parallel Hyperparameter Optimization with OmniOpt on HPC? Don’t hesitate to contact our team!
Check out the other trainings by ScaDS.AI Dresden/Leipzig.