Title: Multimodal Deep Learning for Android Malware Classification
Authors: Arrowsmith J; Susnjak T; Jang-Jaccard J; Buccafurri F
Published: 2025-02-28
Record available: 2025-04-02
Citation: Arrowsmith J, Susnjak T, Jang-Jaccard J. (2025). Multimodal Deep Learning for Android Malware Classification. Machine Learning and Knowledge Extraction, 7(1), 23.
Repository URI: https://mro.massey.ac.nz/handle/10179/72724
DOI: 10.3390/make7010023
ISSN: 2504-4990
Item type: Journal article
Rights: (c) The author/s. CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Keywords: multimodal deep learning for Android malware detection; enhanced malware analysis; graph neural networks; function call graphs (FCG); efficient multimodal late fusion; CNN GNN ensemble; bytecode image analysis; Android APK analysis; data fusion

Abstract: This study investigates the integration of diverse data modalities within deep learning ensembles for Android malware classification. Android applications can be represented as binary images and function call graphs, each offering complementary perspectives on the executable. We synthesise these modalities by combining predictions from convolutional and graph neural networks with a multilayer perceptron. Empirical results demonstrate that multimodal models outperform their unimodal counterparts while remaining highly efficient. For instance, integrating a plain CNN with 83.1% accuracy and a GCN with 80.6% accuracy boosts overall accuracy to 88.3%. DenseNet-GIN achieves 90.6% accuracy, with no further improvement obtained by expanding this ensemble to four models. Based on our findings, we advocate for the flexible development of modalities to capture distinct aspects of applications and for the design of algorithms that effectively integrate this information.
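The abstract describes a late-fusion design in which the class-probability outputs of an image CNN and a function-call-graph GNN are combined by a multilayer perceptron. The sketch below illustrates that idea in PyTorch; the module names, hidden size, and number of classes are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical late-fusion sketch: fuse per-modality prediction vectors from a
# bytecode-image CNN and a function-call-graph GNN with a small MLP.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # assumed number of malware classes


class LateFusionMLP(nn.Module):
    """MLP that fuses concatenated unimodal prediction vectors (late fusion)."""

    def __init__(self, num_modalities: int = 2, num_classes: int = NUM_CLASSES):
        super().__init__()
        in_dim = num_modalities * num_classes  # concatenated softmax outputs
        self.fuse = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, cnn_probs: torch.Tensor, gnn_probs: torch.Tensor) -> torch.Tensor:
        # Each input: (batch, num_classes) probabilities from a frozen unimodal model.
        fused = torch.cat([cnn_probs, gnn_probs], dim=-1)
        return self.fuse(fused)  # logits over malware classes


if __name__ == "__main__":
    # Stand-in predictions from the two unimodal models (e.g. a DenseNet on
    # bytecode images and a GIN on function call graphs).
    batch = 8
    cnn_probs = torch.softmax(torch.randn(batch, NUM_CLASSES), dim=-1)
    gnn_probs = torch.softmax(torch.randn(batch, NUM_CLASSES), dim=-1)

    fusion = LateFusionMLP()
    logits = fusion(cnn_probs, gnn_probs)
    print(logits.shape)  # torch.Size([8, 5])
```

Because the fusion network only consumes low-dimensional prediction vectors, it adds little overhead on top of the unimodal models, which is consistent with the efficiency claim in the abstract.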