Browsing by Author "Zhu Y"
Now showing 1 - 3 of 3
- Accounting students’ online engagement, choice of course delivery format and their effects on academic performance (Taylor and Francis Group, 2023-09-24). Hu Y; Nath N; Zhu Y; Laswad F.
  This study examines the effects of synchronous and non-synchronous online engagement on the academic performance of accounting students at a New Zealand university, based on their choice of course delivery format: either distance learning or face-to-face learning with online components (F2F+). We track accounting students as they complete three financial accounting courses over three consecutive years. Drawing on social constructivism theory, we find that both synchronous and non-synchronous student online engagement are positively related to academic performance, and that this positive effect varies across assessment types. The positive effect of synchronous online engagement on student performance is more pronounced when students choose to learn via F2F+ rather than via distance learning. Further analyses show that the positive effect persists among students with different characteristics. These findings highlight the useful role of student online engagement in learning and support universities in allowing students to choose their preferred course delivery format.
- Completed sample correlations and feature dependency-based unsupervised feature selection (Springer Science+Business Media, LLC, 2023-04). Liu T; Hu R; Zhu Y.
  Sample correlations and feature relations are two pieces of information that need to be considered in unsupervised feature selection, as no labels are available to guide model construction. In this paper, we therefore design a novel unsupervised feature selection scheme that considers completed sample correlations and feature dependencies in a unified framework. Specifically, self-representation dependencies and graph construction are used to preserve and select the important neighbors of each sample in a comprehensive way. In addition, mutual information and sparse learning are employed to capture the correlations between features and to remove redundant features, respectively. Moreover, various constraints are constructed to automatically determine the number of important neighbors and to conduct graph partitioning for the clustering task. Finally, we test the proposed method on eight data sets, verifying its effectiveness and robustness against nine state-of-the-art approaches with regard to three evaluation metrics for the clustering task.
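  The abstract above mentions using mutual information to capture correlations between features and remove redundant ones. The snippet below is a minimal numpy sketch of that general idea only, not the paper's unified objective: it estimates pairwise mutual information between discretized features from joint histograms and greedily drops features that are highly informative about an already-kept feature. The `bins` and `threshold` parameters are illustrative assumptions.

  ```python
  import numpy as np

  def pairwise_mi(X, bins=8):
      """Estimate mutual information between every pair of (discretized) features.

      Each column is binned into `bins` levels; MI is then computed from the
      joint histogram: I(a;b) = sum_{a,b} p(a,b) * log(p(a,b) / (p(a) * p(b))).
      """
      n, d = X.shape
      disc = np.empty((n, d), dtype=int)
      for j in range(d):
          # Interior bin edges map each value to an integer level in 0..bins-1.
          edges = np.histogram_bin_edges(X[:, j], bins=bins)
          disc[:, j] = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, bins - 1)
      mi = np.zeros((d, d))
      for a in range(d):
          for b in range(d):
              joint = np.zeros((bins, bins))
              np.add.at(joint, (disc[:, a], disc[:, b]), 1.0)  # joint counts
              joint /= n
              pa, pb = joint.sum(axis=1), joint.sum(axis=0)     # marginals
              nz = joint > 0
              mi[a, b] = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz]))
      return mi

  def drop_redundant(X, threshold=0.5):
      """Greedily keep features whose MI with every already-kept feature is low."""
      mi = pairwise_mi(X)
      kept = []
      for j in range(X.shape[1]):
          if all(mi[j, k] < threshold for k in kept):
              kept.append(j)
      return kept
  ```

  For instance, if the second column of `X` is an exact copy of the first and a third column is independent noise, `drop_redundant` keeps the first and third columns and drops the duplicate.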
- Initialization-similarity clustering algorithm (Springer Science+Business Media, LLC, 2019-12). Liu T; Zhu J; Zhou J; Zhu Y; Zhu X.
  The classic k-means clustering algorithm selects its initial centroids at random, which can produce unstable clustering results; moreover, random initialization makes the clustering result hard to reproduce. The spectral clustering algorithm is a two-step strategy: it first generates a similarity matrix, and then conducts eigenvalue decomposition on the Laplacian of that similarity matrix to obtain the spectral representation. However, the goal of the first step does not guarantee the best clustering result. To address these issues, this paper proposes an Initialization-Similarity (IS) algorithm that learns the similarity matrix and the new representation in a unified way, and fixes the initialization using sum-of-norms regularization to make the clustering more robust. Experimental results on ten real-world benchmark datasets demonstrate that our IS clustering algorithm outperforms the comparison clustering algorithms on three clustering evaluation metrics: accuracy (ACC), normalized mutual information (NMI), and purity.
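  The two-step spectral clustering strategy described above (build a similarity matrix, then eigendecompose its Laplacian to get the spectral representation) can be sketched in a few lines of numpy. This is a generic textbook version with an RBF similarity and a plain randomly-initialized k-means on the embedding, shown only to illustrate the baseline pipeline; it is not the paper's IS algorithm, and `sigma` is an assumed kernel width.

  ```python
  import numpy as np

  def spectral_embedding(X, sigma=1.0, k=2):
      """Step 1: RBF similarity matrix W. Step 2: eigendecomposition of the
      normalized Laplacian L = I - D^{-1/2} W D^{-1/2}; the eigenvectors of
      the k smallest eigenvalues form the spectral representation."""
      d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      W = np.exp(-d2 / (2.0 * sigma ** 2))
      np.fill_diagonal(W, 0.0)
      d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
      L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
      _, vecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
      return vecs[:, :k]

  def kmeans(Z, k, n_iter=50, seed=0):
      """Plain k-means with random centroid initialization -- the source of
      the instability that motivates the IS algorithm."""
      rng = np.random.default_rng(seed)
      centroids = Z[rng.choice(len(Z), size=k, replace=False)]
      for _ in range(n_iter):
          labels = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(axis=1)
          for j in range(k):  # keep the old centroid if a cluster goes empty
              if np.any(labels == j):
                  centroids[j] = Z[labels == j].mean(axis=0)
      return labels
  ```

  On two well-separated Gaussian blobs, k-means run on the 2-dimensional spectral embedding recovers the two groups; rerunning `kmeans` with different seeds on harder data illustrates the reproducibility issue the abstract raises.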