Massey Documents by Type

Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294

Search Results

Now showing 1 - 3 of 3
  • Item
    Research on adjacent matrix for K-means clustering : a thesis presented for the degree of Master of Computer Science in School of Natural and Computational Sciences at Massey University, Auckland, New Zealand
    (Massey University, 2019) Zhou, Jukai
    Machine learning plays a vital role in the modern world. Depending on whether the data are labelled, machine learning falls into three main categories: unsupervised learning, supervised learning, and semi-supervised learning. Because labels are usually difficult and expensive to obtain, unsupervised learning is more widely applicable than supervised or semi-supervised learning, and k-means clustering is among the most popular unsupervised methods. Hence, this thesis focuses on improving k-means clustering. K-means clustering has been widely applied in real applications because of its linear time complexity and ease of implementation. However, its applicability is limited by several issues, such as identifying the cluster number k, initialising the centroids, and defining the similarity measure used to compare two data points. Hence, k-means clustering remains an active research topic in unsupervised learning. In this thesis, we propose to improve traditional k-means clustering by designing two different similarity matrices to represent the original data points. The first method constructs a new representation (i.e., an adjacent matrix) to replace the original representation of the data points, and then runs k-means clustering on the resulting adjacent matrix. In this way, the proposed method exploits high-order similarity among data points to capture the complex structure inherent in the data, while avoiding the time-consuming eigenvector decomposition required by spectral clustering. The second method improves the first by weighting the features, based on the assumption that different features contribute differently to the construction of the clustering model. As a result, the clustering model is more robust than both the first method and previous clustering methods.
    Finally, we tested the proposed clustering methods on public UCI datasets. Experimental results showed that they significantly outperformed the comparison methods in terms of three evaluation metrics.
  • Item
    Initialization-similarity clustering algorithm
    (Springer Science+Business Media, LLC, 2019-12) Liu T; Zhu J; Zhou J; Zhu Y; Zhu X
    The classic k-means clustering algorithm selects centroids at random for initialization, which can produce unstable clustering results; the random initialization also makes the results hard to reproduce. The spectral clustering algorithm is a two-step strategy: it first generates a similarity matrix and then conducts an eigenvalue decomposition of the similarity matrix's Laplacian to obtain the spectral representation. However, the objective of the first step does not guarantee the best clustering result. To address these issues, this paper proposes an Initialization-Similarity (IS) algorithm, which learns the similarity matrix and the new representation in a unified way and fixes the initialization using sum-of-norms regularization to make the clustering more robust. Experimental results on ten real-world benchmark datasets demonstrate that the IS clustering algorithm outperforms the comparison clustering algorithms in terms of three clustering evaluation metrics: accuracy (ACC), normalized mutual information (NMI), and purity.
  • Item
    Weighted adjacent matrix for K-means clustering
    (Springer Science+Business Media, LLC, 2019-12) Zhou J; Liu T; Zhu J
    K-means clustering is one of the most popular clustering algorithms and is embedded in other clustering algorithms, e.g. as the last step of spectral clustering. In this paper, we propose two techniques that improve the classic k-means clustering algorithm by designing two different adjacent matrices. Extensive experiments on public UCI datasets show that the clustering results of the proposed algorithms significantly outperform three classical clustering algorithms in terms of different evaluation metrics.