An empirical comparison between MapReduce and Spark : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Sciences at Massey University, Auckland, New Zealand
Nowadays, big data has become a hot topic around the world. Thus, how to store, process and analyse this large volume of data has become a challenge for many companies. The advent of distributed computing frameworks provides one efficient solution to the problem. Among these frameworks, Hadoop and Spark are the two most widely used and accepted by the big data community. On that basis, we conducted research to compare the performance of Hadoop and Spark and to examine how parameter tuning affects the results.
The main objective of our research is to understand the differences between Spark and MapReduce, as well as to find the ideal parameters that can improve efficiency. In this paper, we extend the HiBench suite, a benchmark package that provides multiple workloads to test cluster performance from many aspects. We select three workloads from the package that represent the most common applications in daily use: Wordcount (an aggregation job), TeraSort (a shuffle/sort job) and K-means (an iterative job). Through a large number of experiments, we find that Spark is superior to Hadoop for aggregation and iterative jobs, while Hadoop shows its advantages when processing shuffle/sort jobs. In addition, we provide many suggestions for improving the efficiency of the three workloads through parameter tuning. In future work, we plan to investigate whether other factors may affect the efficiency of these jobs.
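The three workloads above stress different execution patterns. As a minimal illustration of the aggregation pattern that the Wordcount workload represents, the following plain-Python sketch mimics the map, shuffle and reduce phases of a MapReduce-style word count (the function names and sample input are illustrative, not taken from HiBench or either framework):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group emitted values by key (word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big compute", "data pipelines"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
# counts == {"big": 2, "data": 2, "compute": 1, "pipelines": 1}
```

In Hadoop this shuffle step involves writing intermediate results to disk between phases, whereas Spark keeps intermediate data in memory where possible, which is one source of the performance differences the experiments measure.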