#9 Language Models for Code Generation

On 7 April 2022 at 11:00 a.m., the ninth lecture of the Living Lab lecture series took place. Klaudia-Doris Thellmann and Bernhard Stadler talked about Language Models for Code Generation.

Efforts to automatically generate source code from natural language instructions to overcome the human-machine language barrier have existed since the early days of computer science. As a result, numerous approaches have emerged over the last decades, ranging from statistical methods with focus on rule induction or probabilistic grammars to artificial neural networks. In the last few years, large-scale pre-trained neural language models have shown impressive performance on a variety of natural language processing tasks and have also become the basis for a growing number of programming-related tasks such as code completion or synthesizing code from natural language descriptions.
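To make the earlier statistical end of this spectrum concrete, here is a deliberately tiny sketch (not one of the models discussed in the lecture): a bigram frequency model over code tokens that "completes" a prompt by greedily appending the most frequent successor token. All corpus snippets and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model over whitespace-tokenized code.
# The corpus is a made-up handful of one-line functions.
corpus = [
    "def add ( a , b ) : return a + b".split(),
    "def sub ( a , b ) : return a - b".split(),
    "def mul ( a , b ) : return a * b".split(),
]

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for tokens in corpus:
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def complete(prompt_tokens, max_new=8):
    """Greedily extend the prompt with the most frequent successor token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return tokens

print(" ".join(complete("def add ( a".split())))
```

Even this crude counter recovers the boilerplate `( a , b ) : return` shared by the training snippets, but it has no notion of the task's meaning; the large pre-trained neural language models discussed in the talk replace the frequency table with learned contextual representations.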

In this talk, ScaDS.AI scientific researchers Klaudia-Doris Thellmann and Bernhard Stadler gave an overview of state-of-the-art language models for program synthesis. They presented the basic characteristics of these models and discussed several of their limitations. A critical weakness of these language models is systematic reasoning, which is crucial for understanding the programming task at hand and generating program code. One possible research direction that could help alleviate this limitation is the inclusion of structural knowledge – an approach they have pursued themselves and briefly introduced in the talk.

Missed this lecture?
You can rewatch it on YouTube.


Find out more about our Living Lab Lecture Series.