This talk motivates Neural Architecture Search (NAS) through the impact of Deep Learning on a wide variety of research and engineering applications that require image recognition, speech recognition, machine translation, autoencoders, or time series analysis techniques. NAS is a subfield of AutoML and has significant overlap with hyperparameter optimization and meta-learning. The talk illustrates a key remaining challenge in Deep Learning: finding the right neural architecture for a given application problem, which involves choosing among a large number of meta-parameters and configuration options. Today, deep neural architectures are mostly developed manually by human experts, a process that is not only compute-intensive and time-consuming but also error-prone. NAS can be framed as a reinforcement learning problem in which generating a specific neural architecture for an application problem is the agent's action and the agent's action space is identical to the search space; a complete NAS method additionally requires a search strategy and a performance estimation strategy. The talk concludes by discussing an automated neural architecture search method that can leverage High-Performance Computing (HPC) and cloud computing resources.
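To make the reinforcement learning framing concrete, the following is a minimal, self-contained sketch, not the method presented in the talk: the search space, the synthetic reward, and the epsilon-greedy update rule are illustrative assumptions standing in for a real controller and for actual model training. The agent's action is sampling an architecture from the search space, and the performance estimation strategy returns a reward that drives the search strategy.

```python
import random

# Hypothetical discrete search space: each architecture is a choice of
# depth, width, and activation. A real NAS search space is far larger.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(preferences):
    """Agent action: sample one architecture from the search space.
    `preferences` holds a running score per option (a crude policy)."""
    arch = {}
    for key, options in SEARCH_SPACE.items():
        # Epsilon-greedy search strategy: explore 20% of the time,
        # otherwise exploit the best-scoring option seen so far.
        if random.random() < 0.2:
            arch[key] = random.choice(options)
        else:
            arch[key] = max(options, key=lambda o: preferences[key][o])
    return arch

def estimate_performance(arch):
    """Performance estimation strategy (stand-in): a synthetic reward.
    A real system would train the candidate network, e.g. for a few
    epochs or on a proxy task, and return its validation accuracy."""
    score = 0.1 * arch["depth"] + 0.01 * arch["width"]
    if arch["activation"] == "relu":
        score += 0.5
    return score + random.gauss(0, 0.1)  # noisy, like real training

def nas_loop(iterations=50):
    preferences = {k: {o: 0.0 for o in opts}
                   for k, opts in SEARCH_SPACE.items()}
    best_arch, best_reward = None, float("-inf")
    for _ in range(iterations):
        arch = sample_architecture(preferences)   # agent takes an action
        reward = estimate_performance(arch)       # environment feedback
        for key, option in arch.items():          # policy update
            preferences[key][option] += 0.1 * (reward - preferences[key][option])
        if reward > best_reward:
            best_arch, best_reward = arch, reward
    return best_arch, best_reward

if __name__ == "__main__":
    arch, reward = nas_loop()
    print(f"best architecture: {arch}, estimated reward: {reward:.3f}")
```

In published RL-based NAS work such as Zoph and Le (2017), the policy is a recurrent controller network trained with REINFORCE rather than the simple per-dimension preference scores used here; the costly reward evaluation in the inner loop is also what makes HPC and cloud resources attractive for parallelizing candidate training.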