Browsing by Author "Rashid MA"
Now showing 1 - 7 of 7
- Item: A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench (BioMed Central Ltd, 14/12/2020). Ahmed N; Barczak ALC; Susnjak T; Rashid MA.
  Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for the industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Owing to its application programming interface (API) availability and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks have more than 150 parameters, and the combination of these parameters has a massive impact on cluster performance. The default parameters let system administrators deploy applications without much effort and measure cluster performance with factory settings. However, an open question remains: can a new parameter selection improve cluster performance for large datasets? To that end, this study investigates the parameters with the greatest impact, relating to resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark on a cluster implemented in our laboratory. We tuned these parameters using a trial-and-error approach based on a large number of experiments. For the comparative analysis, we selected two workloads: WordCount and TeraSort. Performance is measured against three criteria: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis shows that Spark outperforms Hadoop when data sets are small, achieving up to a two-fold speedup on WordCount workloads and up to a 14-fold speedup on TeraSort workloads when default parameter values are reconfigured.
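The tuning experiments described above measure execution time, throughput, and speedup for WordCount under different parameter settings. A minimal PySpark sketch of that kind of measurement, assuming a local file `input.txt` and illustrative shuffle/parallelism values (not the paper's tuned settings):

```python
import time

from pyspark.sql import SparkSession

# Illustrative shuffle/parallelism settings; the study tuned many more
# parameters by trial and error. These values are examples only.
spark = (SparkSession.builder
         .appName("WordCountTuningSketch")
         .config("spark.sql.shuffle.partitions", "64")
         .config("spark.default.parallelism", "64")
         .getOrCreate())

start = time.time()
counts = (spark.sparkContext.textFile("input.txt")  # hypothetical input path
          .flatMap(lambda line: line.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))
n_words = counts.count()  # action forces the job to run
elapsed = time.time() - start

# The paper's three criteria: execution time, throughput, speedup.
input_bytes = 10 * 1024 ** 2        # assume a 10 MB input for illustration
throughput = input_bytes / elapsed  # bytes processed per second
baseline_time = 2 * elapsed         # placeholder default-config runtime
speedup = baseline_time / elapsed   # >1 means the tuning helped
print(f"{n_words} distinct words in {elapsed:.2f}s, "
      f"{throughput:.0f} B/s, speedup {speedup:.2f}x")
spark.stop()
```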
- Item: A Machine Learning Approach to Enhance the Performance of D2D-Enabled Clustered Networks (IEEE, 20/01/2021). Aslam S; Alam F; Hasan SF; Rashid MA.
  Clustering has been suggested as an effective technique to enhance the performance of multicasting networks. Typically, a cluster head is selected to broadcast the cached content to its cluster members utilizing Device-to-Device (D2D) communication. However, some users can attain better performance by being connected to the Evolved Node B (eNB) rather than being in the clusters. In this article, we apply machine learning algorithms, namely Support Vector Machine, Random Forest, and Deep Neural Network, to identify the users that should be serviced by the eNB. We therefore propose a mixed-mode content distribution scheme in which the cluster heads and the eNB service the two segregated groups of users to improve the performance of existing clustering schemes. A D2D-enabled multicasting scenario has been set up to perform a comprehensive simulation study, which demonstrates that the mixed-mode scheme significantly improves the performance of individual users, as well as the whole network, in terms of throughput, energy consumption, and fairness. The study also demonstrates the trade-off between eNB loading and performance improvement for various parameters.
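A toy sketch of the classification step described in this item, using two of the three named algorithms (Support Vector Machine and Random Forest) from scikit-learn. The features, labels, and decision rule below are illustrative assumptions, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-user features: [SINR to cluster head, SINR to eNB,
# distance to cluster head]; label 1 = user better served by the eNB.
X = rng.normal(size=(1000, 3))
y = (X[:, 1] - X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", model.score(X_te, y_te))

# Users predicted as 1 would be serviced directly by the eNB; the rest stay
# in D2D clusters, yielding the mixed-mode content distribution scheme.
```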
- Item: Effects of modulation techniques RZ, NRZ, and CSRZ on the operation of hybrid OCDMA/WDM system for gigabit passive optical networks (2017 IAMOT, 2017-07). Ahmed N; Rashid MA.
  In this paper, the performance of a hybrid optical code division multiple access/wavelength division multiplexing (OCDMA/WDM) system is evaluated for a gigabit passive optical network (GPON). We investigated, compared, and analyzed various modulation techniques over a 5 km distance with channel transmission rates of 2.5 Gbps and 5 Gbps for OCDMA and WDM, respectively. The Enhanced Double Weight (EDW) code is used as the signature address for this system to study its limitations, benefits, and capability to transmit signals and handle high data traffic for future multi-gigabit optical networks. Simulation results revealed that the non-return-to-zero (NRZ) modulation format provides the best performance, with a bit error rate (BER) of 10^-13 at a received optical power of 11.608 dBm. Overall system performance using NRZ improves by 17% and 33% relative to RZ and CSRZ, respectively.
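As background for the reported BER figure, the bit error rate of an intensity-modulated optical link is commonly approximated from the Q factor as BER = 0.5 * erfc(Q / sqrt(2)). A small sketch of that textbook relation (not taken from the paper's simulation setup):

```python
import math

def ber_from_q(q: float) -> float:
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# A BER near 1e-13 corresponds to a Q factor of roughly 7.3-7.4.
for q in (6.0, 7.0, 7.4, 8.0):
    print(f"Q = {q:.1f} -> BER = {ber_from_q(q):.2e}")
```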
- Item: Enterprise systems maturity: A practitioners' perspective (Association for Information Systems, 2009). Mathrani S; Viehland D; Rashid MA.
  Organizations continue to adopt enterprise systems (ES) technology to reduce costs and improve processes with the aim of achieving business benefits. The purpose of this study is to examine the utilization of ES technology and its information by New Zealand (NZ) organizations and their ability to derive benefits. The study does so by exploring (a) how ES data are transformed into knowledge, (b) how this knowledge is utilized to achieve benefits within NZ organizations, and (c) critical success factors for this process. This study gains insights through a "practitioners' perspective" of ES vendors, ES consultants, and IT research firms in a NZ context. Key findings indicate that although many ES implementations in New Zealand are several years old, companies have only recently started tracking benefits through analytical processes to optimize and realize business value from their enterprise systems investment.
- Item: Experimental Performance Analysis of a Scalable Distributed Hyperledger Fabric for a Large-Scale IoT Testbed (MDPI (Basel, Switzerland), 2022-07). Pajooh HH; Rashid MA; Alam F; Demidenko S.
  Blockchain technology, with its decentralization, immutability, and traceability, is well-suited for facilitating secure storage, sharing, and management of data in decentralized Internet of Things (IoT) applications. Despite the increasing development of blockchain platforms, there is still no comprehensive approach for adopting blockchain technology in IoT systems, owing to the blockchain's limited capability to process the substantial transaction volumes generated by massive numbers of IoT devices. Hyperledger Fabric (HLF) is a popular open-source permissioned blockchain platform hosted by the Linux Foundation. This article reports a comprehensive empirical study that measures HLF's performance and identifies potential performance bottlenecks to better meet the requirements of blockchain-based IoT applications. The study considers the implementation of HLF on distributed large-scale IoT systems. First, a model for monitoring the performance of the HLF platform is presented; it addresses overhead challenges while delivering finer-grained detail on system performance and scalability. The proposed framework is then implemented to evaluate the impact of varying network workloads on the performance of the blockchain platform in a large-scale distributed environment. In particular, HLF performance is evaluated in terms of throughput, latency, network size, scalability, and the number of peers serviceable by the platform. The experimental results indicate that the proposed framework can provide detailed real-time performance evaluation of blockchain systems for large-scale IoT applications.
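A schematic of the measurement loop such a benchmark runs: submit transactions at a controlled send rate and record per-transaction latency and overall throughput. The `submit_transaction` function below is a hypothetical stand-in for a real Fabric SDK call, not the paper's monitoring framework:

```python
import statistics
import time

def submit_transaction(payload: bytes) -> None:
    """Hypothetical stand-in for a Hyperledger Fabric SDK submit call."""
    time.sleep(0.005)  # simulate a ~5 ms commit path

def run_workload(n_tx: int, send_rate_tps: float):
    latencies = []
    start = time.time()
    for i in range(n_tx):
        t0 = time.time()
        submit_transaction(f"tx-{i}".encode())
        latencies.append(time.time() - t0)
        # Pace submissions to the target send rate.
        next_slot = start + (i + 1) / send_rate_tps
        time.sleep(max(0.0, next_slot - time.time()))
    elapsed = time.time() - start
    return n_tx / elapsed, statistics.mean(latencies), max(latencies)

tps, mean_lat, max_lat = run_workload(n_tx=200, send_rate_tps=50)
print(f"throughput {tps:.1f} TPS, mean latency {mean_lat * 1000:.1f} ms, "
      f"max latency {max_lat * 1000:.1f} ms")
```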
- Item: IoT Big Data provenance scheme using blockchain on Hadoop ecosystem (BioMed Central Ltd, 2021-12). Honar Pajooh H; Rashid MA; Alam F; Demidenko S.
  The diversity and sheer growth in the number of connected Internet of Things (IoT) devices have raised significant concerns about storing and protecting large volumes of IoT data. Storage volume requirements and computational costs are continuously rising in conventional cloud-centric IoT architectures. Moreover, dependence on a centralized server imposes significant trust issues and makes the system vulnerable to security risks. In this paper, a layer-based distributed data storage design and implementation of a blockchain-enabled large-scale IoT system are proposed. It mitigates the above challenges by using the Hyperledger Fabric (HLF) platform for distributed ledger solutions. The need for a centralized server and a third-party auditor is eliminated by having HLF peers perform transaction verification and record auditing in the big data system with the help of blockchain technology. The HLF blockchain stores lightweight verification tags on the ledger, while the actual metadata are stored in the off-chain big data system to reduce communication overheads and enhance data integrity. Additionally, a prototype has been implemented on embedded hardware, showing the feasibility of deploying the proposed solution in IoT edge computing and big data ecosystems. Finally, experiments have been conducted to evaluate the performance of the proposed scheme in terms of throughput, latency, and communication and computation costs. The results indicate that the proposed solution can retrieve and store the provenance of large-scale IoT data within the Big Data ecosystem using the HLF blockchain. The experiments show a throughput of about 600 transactions, an average response time of 500 ms, and CPU consumption of about 2-3% at the peer process and approximately 10-20% at the client node. The minimum latency remained below 1 s; however, the maximum latency increased once the sending rate reached around 200 transactions per second (TPS).
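The core mechanism described here, a lightweight verification tag on-chain with the metadata held off-chain, can be sketched with a hash digest. The dictionaries below stand in for the HLF ledger and the off-chain big data store; they are not the paper's implementation:

```python
import hashlib
import json

ledger = {}     # stand-in for the HLF blockchain ledger
off_chain = {}  # stand-in for the off-chain big data store (e.g., Hadoop)

def store_record(record_id: str, metadata: dict) -> None:
    blob = json.dumps(metadata, sort_keys=True).encode()
    off_chain[record_id] = blob
    # Only a lightweight tag (a SHA-256 digest) goes on-chain.
    ledger[record_id] = hashlib.sha256(blob).hexdigest()

def verify_record(record_id: str) -> bool:
    """Recompute the digest and compare it with the on-chain tag."""
    return hashlib.sha256(off_chain[record_id]).hexdigest() == ledger[record_id]

store_record("sensor-42", {"device": "sensor-42", "reading": 21.5, "ts": 1700000000})
print(verify_record("sensor-42"))   # True: data intact
off_chain["sensor-42"] += b"tamper"
print(verify_record("sensor-42"))   # False: tampering detected
```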
- Item: Runtime prediction of big data jobs: performance comparison of machine learning algorithms and analytical models (BioMed Central Ltd, 2022-12). Ahmed N; Barczak ALC; Rashid MA; Susnjak T.
  Due to the rapid growth of available data, various platforms offer parallel infrastructure that efficiently processes big data. A critical issue is how to use these platforms to optimise resources, which is why performance prediction has been an important topic in recent years. There are two main approaches to predicting performance. One is to fit data to an equation derived from an analytical model. The other is to use machine learning (ML) in the form of regression algorithms. In this paper, we investigate the difference in accuracy between these two approaches. While our experiments used the open-source platform Apache Spark, the results are applicable to any parallel platform and are not constrained to this technology. We found that gradient boost, an ML regressor, is more accurate than any of the existing analytical models as long as the prediction range falls within that of the training data. We investigated analytical and ML models under both interpolation and extrapolation, using k-fold cross-validation. Under interpolation, two analytical models, namely the 2D-plate and fully-connected models, outperform older analytical models and the kernel ridge regression algorithm, but not the gradient boost regression algorithm; their average accuracies are 0.962 and 0.961, respectively. However, under extrapolation, the analytical models are much more accurate than the ML regressors, particularly the two most recently proposed models (2D-plate and fully-connected), both of which are based on the communication patterns between the nodes. Under extrapolation, the average accuracies of kernel ridge, gradient boost, and the two proposed analytical models are 0.466, 0.677, 0.975, and 0.981, respectively. This study shows that practitioners can benefit from analytical models' ability to accurately predict runtimes outside the range of the training data using only a few experimental runs.
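A compact illustration of the interpolation-versus-extrapolation comparison described above, pitting a gradient boost regressor against a simple analytical runtime model T(n) = a + b*n fitted with least squares. The model form and synthetic data are illustrative; they are not the paper's 2D-plate or fully-connected models:

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Synthetic "runtime versus input size" data with mild noise.
n = np.linspace(1, 100, 80)
t = 5.0 + 0.8 * n + rng.normal(scale=2.0, size=n.size)

# Fit on small inputs only, so larger inputs require extrapolation.
train = n <= 60
gb = GradientBoostingRegressor().fit(n[train, None], t[train])

def analytical(x, a, b):
    return a + b * x  # a toy linear runtime model

(a, b), _ = curve_fit(analytical, n[train], t[train])

for label, mask in (("interpolation", train), ("extrapolation", ~train)):
    gb_err = np.abs(gb.predict(n[mask, None]) - t[mask]).mean()
    an_err = np.abs(analytical(n[mask], a, b) - t[mask]).mean()
    print(f"{label}: mean abs error  gradient boost={gb_err:.2f}  "
          f"analytical={an_err:.2f}")

# Tree-based regressors predict a constant beyond the training range, so the
# analytical model usually wins on extrapolation, matching the study's finding.
```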