A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench

dc.citation.issue: 1
dc.citation.volume: 7
dc.contributor.author: Ahmed N
dc.contributor.author: Barczak ALC
dc.contributor.author: Susnjak T
dc.contributor.author: Rashid MA
dc.date.available: 14/12/2020
dc.date.issued: 14/12/2020
dc.description.abstract: Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Owing to its application programming interface (API) availability and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks have more than 150 parameters, and the combination of these parameters has a massive impact on cluster performance. The default parameters let system administrators deploy applications without much effort and measure the performance of their specific cluster with factory settings. However, an open question remains: can a different parameter selection improve cluster performance for large datasets? To that end, this study investigates the most impactful parameters, covering resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark on a cluster implemented in our laboratory. We tuned these parameters through a trial-and-error approach based on a large number of experiments. For the comparative analysis, we selected two workloads: WordCount and TeraSort. Performance was measured against three criteria: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis shows that Spark outperforms Hadoop when datasets are small, achieving up to 2 times speedup in WordCount workloads and up to 14 times in TeraSort workloads when the default parameter values are reconfigured.
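The three tuning categories named in the abstract (resource utilization, input splits, and shuffle) correspond to standard Hadoop and Spark configuration parameters. A minimal sketch of where such settings are applied; the parameter names are standard framework settings, but the values and jar names are illustrative placeholders, not the configuration used in the paper:

```shell
# Spark: resource utilization and shuffle/parallelism settings
# (values are illustrative, not the paper's tuned configuration)
spark-submit \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=4 \
  --conf spark.default.parallelism=64 \
  wordcount.jar          # hypothetical workload jar

# Hadoop MapReduce: input split size and shuffle sort buffer
# (can be set in mapred-site.xml or passed per job with -D)
hadoop jar hadoop-mapreduce-examples.jar terasort \
  -D mapreduce.input.fileinputformat.split.maxsize=268435456 \
  -D mapreduce.task.io.sort.mb=256 \
  /input /output
```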
dc.description.publication-status: Published
dc.identifier: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000599799400001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=c5bb3b2499afac691c2e3c1a83ef6fef
dc.identifier: ARTN 110
dc.identifier.citation: JOURNAL OF BIG DATA, 2020, 7 (1)
dc.identifier.doi: 10.1186/s40537-020-00388-5
dc.identifier.eissn: 2196-1115
dc.identifier.elements-id: 436695
dc.identifier.harvested: Massey_Dark
dc.identifier.uri: https://hdl.handle.net/10179/16008
dc.publisher: BioMed Central Ltd
dc.relation.isPartOf: JOURNAL OF BIG DATA
dc.subject: HiBench
dc.subject: BigData
dc.subject: Hadoop
dc.subject: MapReduce
dc.subject: Benchmark
dc.subject: Spark
dc.subject.anzsrc: 08 Information and Computing Sciences
dc.title: A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench
dc.type: Journal article
pubs.notes: Not known
pubs.organisational-group: /Massey University
pubs.organisational-group: /Massey University/College of Sciences
pubs.organisational-group: /Massey University/College of Sciences/School of Food and Advanced Technology
pubs.organisational-group: /Massey University/College of Sciences/School of Mathematical and Computational Sciences
Files
Original bundle: A comprehensive performance analysis of Apache Hadoop.pdf (1.91 MB, Adobe Portable Document Format)