Data Dwarfs: A Lens Towards Fully Understanding Big Data and AI Workloads
The complexity and diversity of big data and AI workloads make them difficult to understand. This paper proposes a new approach to characterizing such workloads: we consider each big data or AI workload as a pipeline of one or more classes of units of computation performed on different initial or intermediate data inputs. Each class of unit of computation captures common requirements while remaining reasonably divorced from individual implementations, and hence we call it a data dwarf. For the first time, across a wide variety of big data and AI workloads, we identify eight data dwarfs that account for most of the run time: Matrix, Sampling, Logic, Transform, Set, Graph, Sort, and Statistic. We implement these eight data dwarfs on different software stacks as the micro-benchmarks of an open-source big data and AI benchmark suite, and perform a comprehensive characterization of the dwarfs from the perspectives of data size, type, source, and pattern, as a lens towards fully understanding big data and AI workloads.
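To make the pipeline abstraction concrete, here is a minimal Python sketch, not the suite's actual implementation: a workload is modeled as a sequence of dwarf operations, each consuming the previous stage's intermediate output. All function names and the specific operations are illustrative assumptions.

```python
# Sketch (not from the paper): a workload as a pipeline of "data dwarf" stages.
import numpy as np

def matrix_dwarf(x):          # Matrix: dense linear algebra on the input
    return x @ x.T

def sampling_dwarf(x, n=4):   # Sampling: draw a random subset of rows
    idx = np.random.choice(len(x), size=min(n, len(x)), replace=False)
    return x[idx]

def sort_dwarf(x):            # Sort: order rows by their first column
    return x[np.argsort(x[:, 0])]

def statistic_dwarf(x):       # Statistic: summarize with column means
    return x.mean(axis=0)

# A workload = a pipeline of dwarfs applied to an initial data input.
pipeline = [matrix_dwarf, sampling_dwarf, sort_dwarf, statistic_dwarf]

data = np.random.rand(8, 3)   # initial data input
for dwarf in pipeline:
    data = dwarf(data)        # each stage feeds the next as intermediate data
print(data)
```

Under this view, characterizing the eight dwarfs individually, across data sizes, types, sources, and patterns, yields insight into any workload composed from them.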