Apache Beam and the price of unification
Type of thesis: Master's thesis / Location: Leipzig / Status: Finished
Many Big Data technologies have emerged in recent years. Frameworks like Apache Spark and Apache Flink form a strong analytical backbone for distributed processing. However, with cloud services and further technologies such as Apache Apex on the horizon, a unified programming model that allows the underlying execution engine to be swapped out is becoming increasingly relevant for industrial use.
"We firmly believe Apache Beam is the future of streaming and batch data processing. We hope it will lead to a healthy ecosystem of sophisticated runners that compete by making users happy, not by maximizing market share via API lock in. The most capable runners at this point in time are clearly Google Cloud Dataflow (for running on GCP) and Apache Flink (for on-premises and non-Google cloud), as detailed by the capability matrix recently published. But others are catching up, and the industry as whole is shifting towards supporting the semantics in the Beam Model. This will only lead to more good things for Apache Beam users down the road."
But what is the cost of this abstraction layer? Does Beam replace the need for native Flink or Spark code altogether, or is it still beneficial to translate Beam code into native execution-engine code after a prototyping phase?
In this thesis, we want to evaluate the strengths of Apache Beam by benchmarking it against common execution engines.
Dr. Eric Peukert (email@example.com)
Department of Computer Science
ScaDS.AI Dresden/Leipzig (Center for Scalable Data Analytics and Artificial Intelligence) is a center for Data Science, Artificial Intelligence and Big Data with locations in Dresden and Leipzig.
Copyright 2022 © ScaDS.AI Dresden/Leipzig – All rights reserved.