Big Data Overview
The efficient and intelligent handling of large, often distributed and heterogeneous data sets increasingly determines scientific and economic competitiveness in most application areas. Mobile applications, social networks, multimedia collections, sensor networks, data-intensive scientific experiments, and complex simulations nowadays generate a data deluge. Yet processing and analyzing these data sets with innovative methods opens up various new opportunities for their exploitation and yields new insights. The resulting resource requirements exceed the capabilities of state-of-the-art methods for the acquisition, integration, analysis, and visualization of data. In recent years, many promising approaches have been developed and made available as community frameworks in the big data area for processing large data sets, and these are becoming increasingly interesting for domain scientists to evaluate. The purpose of these frameworks ranges from specialized implementations of deep learning approaches to the processing and analysis of large-scale stream-based sensor data. In addition, sophisticated and specialized hardware options are now available in the high performance computing area, providing architectures tailored to the needs of different analytics workloads. I will discuss methods for providing tailored computing environments for big data analytics and, using real-world analytics scenarios, illustrate the requirements for efficiently provisioning computing environments that best suit the individual needs of a given workload and achieve the best performance.
Wolfgang E. Nagel holds the Chair of Computer Architecture at TU Dresden and is Director of the Center for Information Services and HPC (ZIH). His research covers programming concepts and software tools to support the development of scalable and data-intensive applications, the analysis of computer architectures, and the development of efficient parallel algorithms and methods. Prof. Nagel is chairman of the Gauß-Allianz e.V. and a member of the international Big Data and Extreme-scale Computing (BDEC) project. He leads the big data competence center “ScaDS – Competence Center for Scalable Data Services and Solutions Dresden/Leipzig”, funded by the German Federal Ministry of Education and Research.
Volker Markl is a Full Professor and Chair of the Database Systems and Information Management Group at the Technische Universität Berlin (TUB) and an Adjunct Full Professor at the University of Toronto. He is Director of the Intelligent Analytics for Massive Data Research Group at DFKI and Director of the Berlin Big Data Center. In addition, he serves as the Secretary of the VLDB Endowment. His current research interests include new hardware architectures for information management, scalable processing and optimization of declarative data analysis programs, and scalable data science. To date, Volker has presented over 200 invited talks in industrial settings, at major conferences, and at research institutions worldwide. Furthermore, he has authored and published over 100 research papers at world-class scientific venues. From 2010 to 2016, he was Speaker and Principal Investigator of the Stratosphere Research Unit funded by the German Research Foundation (DFG), which resulted in numerous top-tier publications as well as the Apache Flink big data analytics system. In 2014, he was named one of Germany’s leading digital minds (Digitale Köpfe) by the German Informatics Society. Prior to joining TUB, he was a Research Staff Member and Project Leader at the IBM Almaden Research Center in San Jose, California.