Massey Documents by Type

Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294

Search Results

Now showing 1 - 10 of 16
  • Item
    Improving network lifetime through energy-efficient protocols for IoT applications : thesis submitted to the School of Food and Advanced Technology, Massey University New Zealand, in partial fulfilment of the requirements for the degree of Doctor of Philosophy
    (Massey University, 2022) Mishra, Mukesh
    Sensors are ubiquitous. They can be found in homes, factories, farms, and just about everywhere else. To meet distributed sensing requirements, several sensors are deployed and connected over a wireless medium to form a Wireless Sensor Network (WSN). Sensor nodes exchange information with one another and with a base station (BS). We begin with a review of recent work on cross-layer WSN design techniques based on the Open System Interconnection (OSI) model. The distributed sensor nodes are often grouped into clusters, and a cluster head (CH) is chosen to route data from the sensor nodes to the BS. The thesis evaluates constraint-based routing algorithms, which choose a routing path that satisfies administrative or Quality of Service (QoS) constraints. Different algorithms reduce costs, balance network load, and improve security. Clustering sensor nodes in a WSN is an important technique for lowering sensor energy consumption and thus extending the network's lifetime. The cluster head serves as a router in the network and is in charge of gathering sensed information from cluster members and transmitting it to a destination node or base station/sink. To safely elect a cluster head, an efficient clustering approach is required; this remains an important task for overall network performance. Accordingly, this study proposes a cluster-head selection scheme based on a trust factor that ensures all nodes are trustworthy and authentic during communication. Direct trust is calculated using parameters such as residual energy and node distance. Further, the K-means clustering algorithm is employed for cluster head selection. The simulation results show that the proposed solution outperforms the LEACH (Low-Energy Adaptive Clustering Hierarchy) protocol in network lifetime, packet delivery ratio, and energy consumption.
Furthermore, this strategy can significantly improve performance while discriminating between legitimate and malicious (or compromised) nodes in the network. The use of IoT in wireless sensor networks presents substantial challenges in ensuring network longevity due to the high energy requirements of sensing, processing, and data transmission. Thus, multiple conventional algorithms with optimization methodologies have been developed to increase WSN performance. These algorithms focus on network-layer routing protocols for dependable, energy-efficient communication, extending network life. This thesis proposes a multi-objective optimization strategy that calculates the optimum path for packets from the source to the sink or base station. The proposed model works in two steps. First, a trust model selects a cluster head to control the data connection between the BS and cluster nodes. Then, to determine data transmission routes, a novel hybrid algorithm is proposed that combines a particle swarm optimization (PSO) algorithm with a genetic algorithm (GA). The obtained results validate the proposed approach's efficiency, as it outperforms existing methods in terms of energy efficiency, network throughput, packet delivery ratio, and residual energy across all iterations. Sensor nodes (SNs) have very constrained memory, energy, and computational resources. These limitations are further exacerbated by the large volume of sensing data generated in a distributed IoT application. Energy can be saved by compressing data at the sensor node or CH level before transmission. The majority of data compression research has been motivated by image and video compression; however, most of these algorithms are inapplicable on sensor nodes due to memory restrictions, energy consumption, and processing speed.
To address this issue, we chose established data compression techniques such as Run-Length Encoding (RLE) and Adaptive Huffman Encoding (AHE), which require far fewer resources and can be executed on sensor nodes. Both RLE and AHE can trade off compression ratio against energy utilisation effectively. This thesis first evaluates the compression efficiency of RLE and AHE. A Hybrid-RLEAHE (H-RLEAHE) scheme is then proposed and tested for sensor nodes. Simulations were run to validate the efficacy of the proposed hybrid algorithm, and the results were compared against RLE, AHE, and no compression for five different cases. RLE data compression outperforms H-RLEAHE and AHE in energy efficiency, network performance, packet delivery ratio, and residual energy across all iterations.
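As a concrete illustration of the kind of lightweight compression this thesis evaluates, run-length encoding can be sketched in a few lines. This is a generic textbook RLE, not the thesis's exact sensor-node implementation:

```python
def rle_encode(data):
    """Run-length encode a sequence as a list of (value, count) pairs."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for item in data[1:]:
        if item == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = item, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

On sensor data with long runs of repeated readings, the (value, count) list is much shorter than the raw stream, which is what makes RLE attractive under tight memory and energy budgets.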
  • Item
    Clustering algorithm for D2D communication in next generation cellular networks : thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering, Massey University, Auckland, New Zealand
    (Massey University, 2021) Aslam, Saad
    Next generation cellular networks will support many complex services for smartphones, vehicles, and other devices. To accommodate such services, cellular networks need to go beyond the capabilities of their previous generations. Device-to-Device (D2D) communication is a key technology that can help fulfil some of the requirements of future networks. The telecommunication industry expects a significant increase in the density of mobile devices, which puts more pressure on centralized schemes and poses risks in terms of outages, poor spectral efficiencies, and low data rates. Recent studies have shown that a large part of cellular traffic pertains to sharing popular content. This highlights the need for decentralized and distributive approaches to managing multimedia traffic. Content-sharing via D2D clustered networks has emerged as a popular approach for alleviating the burden on the cellular network. Different studies have established that D2D communication in clusters can improve spectral and energy efficiency and achieve low latency while increasing the capacity of the network. To achieve effective content-sharing among users, appropriate clustering strategies are required. Therefore, the aim is to design and compare clustering approaches for D2D communication targeting content-sharing applications. Currently, most researched and implemented clustering schemes are centralized or predominantly dependent on the Evolved Node B (eNB). This thesis proposes a distributed architecture that supports clustering approaches to incorporate multimedia traffic. A content-sharing network is presented where some D2D User Equipment (DUE) nodes function as content distributors for nearby devices. Two promising techniques, namely Content-Centric Networking and Network Virtualization, are utilized to propose a distributed architecture that supports efficient content delivery. We propose to use clustering at the user level for content distribution.
A weighted multi-factor clustering algorithm is proposed for grouping the DUEs sharing a common interest. Various performance parameters, such as energy consumption, area spectral efficiency, and throughput, have been considered for evaluating the proposed algorithm. The effect of the number of clusters on the performance parameters is also discussed. The proposed algorithm has been further modified to allow for a trade-off between fairness and other performance parameters. A comprehensive simulation study is presented that demonstrates that the proposed clustering algorithm is more flexible and outperforms several well-known and state-of-the-art algorithms. The clustering process is subsequently evaluated from an individual user's perspective for further performance improvement. We believe that some users, sharing common interests, are better off served by the eNB rather than being in the clusters. We utilize machine learning algorithms, namely Deep Neural Network, Random Forest, and Support Vector Machine, to identify the users that are better served by the eNB, and form clusters from the rest of the users. This proposed user segregation scheme can be used in conjunction with most clustering algorithms, including the proposed multi-factor scheme. A comprehensive simulation study demonstrates that with such novel user segregation, the performance of individual users, as well as the whole network, can be significantly improved in terms of throughput, energy consumption, and fairness.
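The weighted multi-factor idea can be sketched as follows. The factor names (battery, proximity, interest overlap) and the weights are illustrative assumptions, not the thesis's actual parameter set:

```python
def multifactor_score(device, weights):
    """Weighted sum of per-device factors, each assumed pre-normalized to [0, 1].
    The factor names here are illustrative, not the thesis's exact set."""
    return sum(w * device[factor] for factor, w in weights.items())

def pick_cluster_heads(devices, weights, n_heads):
    """Rank candidate DUEs by combined score; the top n act as content distributors."""
    ranked = sorted(devices, key=lambda d: multifactor_score(d, weights), reverse=True)
    return ranked[:n_heads]
```

Changing the weight vector is what gives the modified algorithm its fairness/performance trade-off: shifting weight toward, say, residual battery spreads the distributor role across more devices over time.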
  • Item
    Towards implementing RSA-based CP-ABE algorithm on Android system : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Sciences at Massey University, Auckland, New Zealand
    (Massey University, 2019) Xing, Jiaxin
    Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been proposed to encrypt and decrypt data based on the matching between attributes and an access policy placed over the ciphertext. Using CP-ABE, a data owner can encrypt data along with an access policy to enforce fine-grained access control. To improve performance, this study chose an RSA-based CP-ABE algorithm with an access-tree structure, whereas most existing CP-ABE schemes have been implemented using ECC. This RSA-based CP-ABE algorithm was implemented on a Linux system in another study, while this thesis addresses an implementation strategy on the Android system. To achieve this goal, a simple encryption application was designed for users who want to encrypt and decrypt messages on their mobile devices. This study used Android Studio to create the encryption application. In this cipher program, users input the message they want to encrypt and obtain the encrypted data through the function button named "CIPHER"; they can also decrypt the ciphertext in the same way. A CP-ABE scheme involves four main algorithms: setup, key generation, encryption, and decryption. During setup, the scheme uses the RSA algorithm to choose two prime numbers, which are used to generate a master public key and a master private key. In key generation, a secret key is generated for a set of attributes using the master private key. In encryption, a ciphertext is created together with an access tree. In decryption, the encrypted data can be decoded if and only if the attributes of the user's decryption key satisfy the access policy. This algorithm uses a lightweight pairing-free cryptosystem construction based on RSA, and the construction supports an expressive monotone tree access structure to implement complex access control in a more generic system.
By using this algorithm, the encryption and decryption processes are more efficient and secure.
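The RSA primitives underlying the setup step can be sketched as textbook RSA. This is illustrative only: a deployed CP-ABE scheme uses large vetted primes and proper message encoding, and the access-tree machinery is omitted here:

```python
def rsa_keygen(p, q, e=65537):
    """Textbook RSA key generation from two primes p and q (toy sizes for
    illustration; real deployments use 2048-bit-plus primes and padding)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)   # modular inverse of e mod phi (Python 3.8+)
    return (n, e), (n, d)

def rsa_encrypt(m, pub):
    """c = m^e mod n."""
    n, e = pub
    return pow(m, e, n)

def rsa_decrypt(c, priv):
    """m = c^d mod n."""
    n, d = priv
    return pow(c, d, n)
```

In the CP-ABE construction, moduli and exponents derived this way replace the bilinear pairings of ECC-based schemes, which is the source of the claimed efficiency gain on constrained Android hardware.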
  • Item
    Research on adjacent matrix for K-means clustering : a thesis presented for the degree of Master of Computer Science in School of Natural and Computational Sciences at Massey University, Auckland, New Zealand
    (Massey University, 2019) Zhou, Jukai
    Machine learning is playing a vital role in our modern world. Depending on whether the data has labels, machine learning mainly comprises three categories: unsupervised learning, supervised learning, and semi-supervised learning. As labels are usually difficult and expensive to obtain, unsupervised learning is more popular than supervised and semi-supervised learning. Moreover, k-means clustering is very popular in the domain of unsupervised learning. Hence, this thesis focuses on improving previous k-means clustering. K-means clustering has been widely applied in real applications due to its linear time complexity and ease of implementation. However, its applicability is limited by issues such as identifying the cluster number k, initialising the centroids, and defining similarity measurements for evaluating the similarity between two data points. Hence, k-means clustering is still a hot research topic in unsupervised learning. In this thesis, we propose to improve traditional k-means clustering by designing two different similarity matrices to represent the original data points. The first method constructs a new representation (i.e., an adjacency matrix) to replace the original representation of the data points, and then runs k-means clustering on the resulting adjacency matrix. In this way, our proposed method benefits from the high-order similarity among data points to capture the complex structure inherent in the data, while avoiding the time-consuming eigenvector decomposition of spectral clustering. The second method takes into account the weights of the features to improve the former method, based on the assumption that different features contribute differently to the construction of the clustering models. As a result, it makes the clustering model more robust, compared to the first method as well as previous clustering methods.
Finally, we tested our proposed clustering methods on public UCI datasets. Experimental results showed that the clustering results of our proposed methods significantly outperformed the comparison methods in terms of three evaluation metrics.
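A minimal sketch of the first method's pipeline: build a pairwise similarity (adjacency) matrix as the new representation, then run k-means on its rows. A Gaussian kernel and a simple deterministic k-means stand in for the thesis's exact construction:

```python
import numpy as np

def gaussian_adjacency(X, sigma=1.0):
    """Pairwise Gaussian similarity matrix: the new representation of the data.
    Row i is point i's similarity profile against every point."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point initialisation."""
    centers = [X[0]]
    while len(centers) < k:
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels
```

Because each point is represented by its similarity profile to all points, cluster structure shows up as block structure in the matrix rows, and k-means can exploit it without the eigendecomposition spectral clustering requires.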
  • Item
    Improved K-means clustering algorithms : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science, Massey University, New Zealand
    (Massey University, 2020) Liu, Tong
    The K-means clustering algorithm divides samples into subsets with the goal of maximizing intra-subset similarity and inter-subset dissimilarity, where the similarity measures the relationship between two samples. As an unsupervised learning technique, K-means is considered one of the most used clustering algorithms and has been applied in a variety of areas such as artificial intelligence, data mining, biology, psychology, marketing, and medicine. However, K-means is not robust: its clustering result depends on the initialization, the similarity measure, and the predefined cluster number. Previous research focused on solving some of these issues but has not addressed them in a unified framework, and fixing one issue alone does not guarantee the best performance. Improving K-means, one of the most famous and widely used clustering algorithms, by solving its issues simultaneously is therefore both challenging and significant. This thesis conducts extensive research on the K-means clustering algorithm, aiming to improve it. First, we propose the Initialization-Similarity (IS) clustering algorithm to solve the initialization and similarity-measure issues of K-means in a unified way. Specifically, we fix the initialization of the clustering by using sum-of-norms (SON) regularization, which outputs a new representation of the original samples, and we learn the similarity matrix based on the data distribution. The derived new representation is then used to conduct K-means clustering. Second, we propose a Joint Feature Selection with Dynamic Spectral (FSDS) clustering algorithm to solve the issues of cluster-number determination, the similarity measure, and the robustness of the clustering by selecting effective features and reducing the influence of outliers simultaneously.
Specifically, we propose to learn the similarity matrix based on the data distribution and to add a rank constraint on the Laplacian matrix of the learned similarity matrix to automatically output the cluster number. Furthermore, the proposed algorithm employs the L2,1-norm as a sparsity constraint on the regularization term and on the loss function, to remove redundant features and reduce the influence of outliers, respectively. Third, we propose a Joint Robust Multi-view (JRM) spectral clustering algorithm that conducts clustering for multi-view data while solving the initialization issue, cluster-number determination, similarity-measure learning, removal of redundant features, and reduction of outlier influence in a unified way. Finally, the proposed algorithms outperformed state-of-the-art clustering algorithms on real data sets. Moreover, we theoretically prove the convergence of the proposed optimization methods for the proposed objective functions.
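The L2,1-norm at the heart of the sparsity term is simple to state: the L2 norm of each matrix row, summed. A small sketch:

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm of a matrix: sum of the L2 norms of its rows.
    Minimising it drives entire rows of W to zero, so when rows correspond
    to features, the zeroed rows are the redundant features being dropped."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()
```

Unlike the squared Frobenius norm, which shrinks all entries smoothly, this row-grouped penalty behaves like a lasso across whole rows, which is why it performs feature selection rather than mere shrinkage.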
  • Item
    On the use of optimal search algorithms with artificial potential field for robot soccer navigation : Computer Science, Master of Science
    (Massey University, 2018) Dong, Chen
    The artificial potential field (APF) is a popular method of choice for robot navigation, as it offers an intuitive model clearly defining all attractive and repulsive forces acting on the robot [3] [25] [29] [43] [50]. However, there are drawbacks that limit the usage of this method, for instance the local-minima problem, which can trap the robot, and the Goal-Non-Reachable-with-Obstacle-Nearby (GNRON) problem, as reported in [51] [5] [23] [2] and [3]. To avoid these limitations, this research focuses on devising a methodology that combines the artificial potential field with a selection of optimal search algorithms. This work investigates the performance of the method when using different optimal search algorithms, such as the A* algorithm and the any-angle path-planning Theta* search, in combination with different types of artificial potential field generators. We also present a novel integration technique, whereby the potential field approach is utilized as an internal component of an optimal search algorithm, taking into account the safeness of the calculated paths. Furthermore, this study also explores the optimization of several auxiliary algorithms used in conjunction with the APF-optimal-search integration: three different methods are proposed for implementing the line-of-sight (LOS) component of the Theta* search, namely a simple line-of-sight checking algorithm, a modified Bresenham's line algorithm, and a modified Cohen-Sutherland algorithm. Contrary to the studies presented in [5], [42], [48] and [40], where the APF and the optimal search algorithms were used separately, in this research an integrative methodology involving the APF inside the optimal search with a newly proposed Safety Factor (SF) is explored.
Experimental results indicate that the APF-A* search with the SF can reduce the number of state expansions, and therefore the running time, by up to 19.61%, while maintaining the safeness of the path, compared to APF-A* without the SF. Furthermore, this research also explores how the proposed hybrid algorithms can be used in developing multi-objective behaviours of a single robot. In this regard, a robot soccer simulation platform with a physics engine is developed to support the exploration. Lastly, the performance of the proposed algorithms is examined under varying environment conditions. Evidence is provided showing that the method can be used in constructing the intelligence for a robot goalkeeper and a robot attacker (ball shooter). A multitude of AI robot behaviours using the proposed methods are integrated via a finite state machine, including defensive positioning/parking, ball kicking/shooting, and target-pursuing behaviours. Keywords: Artificial Potential Field, Optimal Searches, Robot Navigation, Multi-objective Behaviours.
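The basic APF force computation underlying all of these variants can be sketched as a quadratic attractive pull toward the goal plus the classic inverse-distance repulsion from nearby obstacles. The gains, influence radius, and fixed step size below are illustrative assumptions, not the thesis's tuned values:

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Net force at pos: attraction toward goal plus repulsion from any
    obstacle closer than the influence radius d0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    return fx, fy

def follow_field(start, goal, obstacles, step=0.05, iters=2000, tol=0.1):
    """Greedy descent on the field: repeatedly step along the force direction."""
    pos = start
    for _ in range(iters):
        fx, fy = apf_force(pos, goal, obstacles)
        norm = math.hypot(fx, fy) or 1.0
        pos = (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            break
    return pos
```

Greedy descent like this is exactly what gets trapped in local minima; wrapping the same force model inside A* or Theta* node expansion, as the thesis does, is what restores completeness.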
  • Item
    Algorithms and implementation of functional dependency discovery in XML : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Sciences in Information Systems at Massey University
    (Massey University, 2006) Zhou, Zheng
    1.1 Background: Following the advent of the web, there has been a great demand for data interchange between applications using internet infrastructure. XML (eXtensible Markup Language) provides a structured representation of data, empowered by broad adoption and easy deployment. As a subset of SGML (Standard Generalized Markup Language), XML has been standardized by the World Wide Web Consortium (W3C) [Bray et al., 2004]. XML is becoming the prevalent data exchange format on the World Wide Web and is increasingly significant for storing semi-structured data. Since its initial release in 1996, it has evolved and been applied extensively in all fields where the exchange of structured documents in electronic form is required. With the growing popularity of XML, the issue of functional dependency in XML has recently received well-deserved attention. The driving force for the study of dependencies in XML is that they are as crucial to XML schema design as to relational database (RDB) design [Abiteboul et al., 1995].
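For readers unfamiliar with the concept, a functional dependency X → Y holds when any two tuples agreeing on X also agree on Y. A minimal relational sketch of the check (the thesis's XML setting replaces flat tuples with tree paths, which is omitted here):

```python
def holds_fd(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds over a
    table given as a list of dicts: each lhs value must map to one rhs value."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True
```

Discovery algorithms essentially run checks like this over the lattice of candidate attribute sets, pruning supersets of dependencies already found.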
  • Item
    Design of an FPGA-based smart camera and its application towards object tracking : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Electronics and Computer Engineering at Massey University, Manawatu, New Zealand
    (Massey University, 2016) Contreras, Miguel
    Smart cameras and hardware image processing are not new concepts, yet despite the fact that both have existed for several decades, not much literature has been presented on the design and development process of hardware-based smart cameras. This thesis examines and demonstrates the principles needed to develop a smart camera in hardware, based on the experiences from developing an FPGA-based smart camera. The smart camera is implemented on a Terasic DE0 FPGA development board, using Terasic's 5-megapixel GPIO camera. The algorithm operates at 120 frames per second at a resolution of 640x480 by utilising a modular streaming approach. Two case studies are explored in order to demonstrate the development techniques established in this thesis. The first case study develops the global vision system for a robot soccer implementation. The algorithm identifies and calculates the positions and orientations of each robot and the ball. As in many robot soccer implementations, each robot has colour patches on top to identify it and aid in finding its orientation. The ball is a single solid colour that is completely distinct from the colour patches. Due to the presence of uneven light levels, a YUV-like colour space labelled YC1C2 is used to make the colour values more light-invariant. The colours are then classified, and a connected-components algorithm segments the colour patches. The shapes of the classified patches are then used to identify the individual robots, and a CORDIC function is used to calculate the orientation. The second case study investigates an improved colour segmentation design. A new HSY colour space is developed by remapping the YC1C2 Cartesian coordinate system to a polar coordinate system. This provides improved colour segmentation by allowing for variations in colour value caused by uneven light patterns and changing light levels.
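The Cartesian-to-polar remapping behind the HSY space can be sketched in floating point. On the FPGA the same angle/magnitude computation would typically be done with a CORDIC unit; the exact scaling used in the thesis is an assumption here:

```python
import math

def to_hsy(y, c1, c2):
    """Remap the chrominance pair (c1, c2) from Cartesian to polar form:
    hue is the angle (degrees in [0, 360)), saturation the radius;
    the luma channel y passes through unchanged."""
    hue = math.degrees(math.atan2(c2, c1)) % 360.0
    saturation = math.hypot(c1, c2)
    return hue, saturation, y
```

The payoff is that uneven lighting, which scales c1 and c2 together, changes saturation but leaves hue fixed, so thresholding on hue tolerates lighting variation far better than rectangular thresholds in the (C1, C2) plane.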
  • Item
    Adapting ACME to the database caching environment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Systems at Massey University
    (Massey University, 2003) Riaz-ud-Din, Faizal
    The field of database cache replacement has seen a great many replacement policies presented in the past few years. As the challenge to find the optimal replacement policy continues, new methods and techniques for determining cache victims have been proposed, with some having a greater effect on results than others. Adaptive algorithms attempt to adapt to changing patterns of data access by combining the benefits of other existing algorithms. Such adaptive algorithms have recently been proposed in the web-caching environment; however, there is a lack of such research in the area of database caching. This thesis investigates an attempt to adapt a recently proposed adaptive web-caching algorithm, known as Adaptive Caching with Multiple Experts (ACME), to the database environment. Recently proposed replacement policies are integrated into ACME's existing policy pool, in an attempt to gauge its ability and robustness to readily incorporate new algorithms. The results suggest that ACME is indeed well suited to the database environment, and performs as well as the best-performing caching policy within its policy pool at any particular point in its request stream. Although execution time increases as more policies are integrated into ACME, the overall time saved increases because higher hit rates and fewer misses on the cache avoid disk reads.
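The machinery of ACME can be caricatured in a few lines: each replacement policy is an "expert" that nominates an eviction victim, and experts whose evictions later cause misses are multiplicatively down-weighted. This is a deliberately simplified sketch with just two experts; real ACME maintains a virtual cache per expert and a more careful loss and weight-update scheme:

```python
from collections import OrderedDict, defaultdict

class WeightedPolicyCache:
    """Toy mixture-of-experts cache: LRU and LFU each nominate a victim;
    the currently highest-weighted expert wins, and an expert is penalised
    when an object it evicted is requested again (a miss it caused)."""

    def __init__(self, capacity, beta=0.9):
        self.capacity = capacity
        self.beta = beta                    # multiplicative penalty factor
        self.store = {}
        self.recency = OrderedDict()        # access order, for the LRU expert
        self.freq = defaultdict(int)        # access counts, for the LFU expert
        self.weights = {"lru": 1.0, "lfu": 1.0}
        self.evicted_by = {}                # victim key -> expert that chose it

    def _victims(self):
        lru_victim = next(iter(self.recency))
        lfu_victim = min(self.store, key=lambda k: self.freq[k])
        return {"lru": lru_victim, "lfu": lfu_victim}

    def access(self, key, value=None):
        if key in self.evicted_by:          # that expert's choice cost a miss
            self.weights[self.evicted_by.pop(key)] *= self.beta
        if key in self.store:               # hit
            self.recency.move_to_end(key)
            self.freq[key] += 1
            return self.store[key]
        if len(self.store) >= self.capacity:
            victim = self._victims()[max(self.weights, key=self.weights.get)]
            self.evicted_by[victim] = max(self.weights, key=self.weights.get)
            del self.store[victim]
            del self.recency[victim]
        self.store[key] = value             # miss: insert
        self.recency[key] = None
        self.freq[key] += 1
        return None
```

Even this toy version shows the key property the thesis relies on: the master cache drifts toward whichever pooled policy is currently best for the request stream, so adding a new database-oriented policy to the pool can only help when that policy wins.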
  • Item
    Novel digital VLSI implementation of data encryption algorithm using nano-metric CMOS technology : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Auckland, New Zealand
    (Massey University, 2013) Ahmad, Nabihah Nornabihah
    Implementations of the Advanced Encryption Standard (AES) have rapidly grown in various applications including telecommunications, finance and networks that require a low power consumption and low cost design. Presented in this thesis is a new 8-bit stream cipher architecture core for an application specific integrated circuit AES crypto-processor. The chip area and power are optimised along with high throughput by employing circuit-level techniques, resource sharing and low supply voltage. The proposed design includes a novel S-box/ InvS-box, MixColumn/ InvMixColumn and ShiftRow/ InvShiftRow with a novel low power Exclusive OR (XOR) gate applied to all sub systems to minimise the power consumption. It is implemented in a 130nm CMOS process and supports both encryption and decryption in Electronic Codebook Mode (EBC) using 128-bit keys with a throughput of 0.05Gbit/s (at 100MHz clock). This design utilises 3152 gate equivalents, including an on-the-fly key scheduling unit along with 4.23μW/MHz power consumption. The area of the chip is 640μm×325μm (0.208 square mm), excluding the bonding pads. Compared to other 8-bit implementations, the proposed design achieves a smaller chip size along with higher throughput and lower power dissipation. This thesis also describes a new fault detection scheme for S-box/ InvS-box that is parity prediction based to protect the key from fault attacks.