Enhancing Android malware detection explainability through function call graph APIs

Soi, Diego; Sanna, Alessandro; Maiorca, Davide; Giacinto, Giorgio
2024-01-01

Abstract

Nowadays, mobile devices are massively used in everyday activities, so they contain sensitive data, such as bank accounts and personal information, that threat actors target. Over the years, Machine Learning approaches have been proposed to identify malicious Android applications, but recent research highlights the need for better explanations of model decisions, as existing ones may not be related to the app’s malicious functionalities. This paper proposes an explainable approach based on static analysis to detect Android malware. The novelty lies in the specific analysis conducted to select and extract the features (i.e., APIs taken from the DEX Call Graph), which immediately provide meaningful explanations of the model’s functionality, thus allowing a significant correlation of the malware behavior with its family. Moreover, since we constrain the number and type of features, the distinct impact of each one becomes more evident. The attained results show that it is possible to reach accuracy comparable to existing state-of-the-art models while providing easy-to-understand explanations, which may yield significant insights into the malicious functionalities of the samples.
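To make the feature-extraction idea in the abstract concrete, below is a minimal sketch of how APIs could be collected from an app's DEX call graph and turned into a binary feature vector. It assumes Androguard as the static-analysis tool; the function name extract_api_features, the parameter api_vocabulary, and the example API identifier are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumption: Androguard as static-analysis tool; the paper's
    # exact extraction pipeline may differ). Walks the DEX call graph of an APK and
    # marks which APIs from a pre-selected vocabulary are reachable.
    from androguard.misc import AnalyzeAPK


    def extract_api_features(apk_path, api_vocabulary):
        """Return a binary feature vector: 1 if the app's call graph contains a
        call to the corresponding API in `api_vocabulary`, 0 otherwise.
        `api_vocabulary` is a hypothetical, pre-selected list of API identifiers,
        e.g. 'Landroid/telephony/SmsManager;->sendTextMessage'."""
        _, _, dx = AnalyzeAPK(apk_path)    # dx is Androguard's Analysis object
        call_graph = dx.get_call_graph()   # networkx DiGraph; nodes are methods

        reached_apis = set()
        for _caller, callee in call_graph.edges():
            # External methods are framework/library APIs not defined in the app's DEX.
            if callee.is_external():
                # Attribute names follow recent Androguard releases (assumption).
                reached_apis.add(f"{callee.class_name}->{callee.name}")

        return [1 if api in reached_apis else 0 for api in api_vocabulary]

Each resulting vector can then be fed to a classifier; keeping the feature space restricted to a small set of call-graph APIs is what makes the per-feature contributions, and hence the explanations discussed in the abstract, easy to interpret.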
2024
Malware analysis, Deep learning, Explainability, Android
Files in this record:
Enhancing android malware detection explainability through function call graph APIs_2024.pdf
Access: open access
Description: online article
Type: publisher's version
Size: 1.65 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/388625
Citations
  • Scopus 0