
SHAP happens: an explainable IDS for industrial IoT networks

Loi, Pierangelo; Regano, Leonardo; Maiorca, Davide; Giacinto, Giorgio
2025-01-01

Abstract

Industrial Internet of Things (IIoT) technologies have been increasingly leveraged across various industry sectors, due to their benefits in terms of automation, monitoring, and operational efficiency. However, the increased connectivity and heterogeneity of IIoT devices have also broadened the attack surface, making these systems attractive targets for cyber threats. In this context, machine learning–based Intrusion Detection Systems (IDS) have emerged as promising solutions due to their ability to detect complex patterns in network traffic without relying on static rules or deep packet inspection. A key limitation of such systems, however, lies in their lack of interpretability, posing challenges for adoption in safety-critical industrial settings. In this work, we propose an explainable IDS that leverages a Random Forest classifier for accurate traffic classification and integrates SHAP (SHapley Additive exPlanations) to provide transparent explanations of model decisions. We evaluate our system using the CIC IoT-DIAD 2024 dataset, which includes a broad spectrum of network attacks. Our approach demonstrates good detection performance while also delivering intuitive explanations for each prediction. By analyzing the specific network features, such as inter-arrival times and packet sizes, that most influence each alert, security analysts may better assess, validate, and act upon IDS outputs.
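The combination described in the abstract — a Random Forest classifier whose per-prediction alerts are attributed to individual flow features via Shapley values — can be sketched in a few lines. The snippet below is purely illustrative and is not the authors' pipeline: it uses synthetic data in place of the CIC IoT-DIAD 2024 dataset, and it computes exact Shapley attributions by brute force over feature subsets (feasible here because there are only four features). In practice one would use an efficient tree-specific explainer such as the `shap` library's `TreeExplainer`.

```python
import itertools
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for flow features (e.g. inter-arrival time, packet size);
# hypothetical data, not the CIC IoT-DIAD 2024 dataset used in the paper.
n, d = 400, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # "attack" label from two features

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
f = lambda Z: model.predict_proba(Z)[:, 1]  # model output: P(attack)

def shapley_values(x, background):
    """Exact Shapley attributions of f(x) by brute force over feature subsets.
    Absent features are replaced with background rows (interventional expectation)."""
    d = len(x)

    def v(S):
        Z = background.copy()
        Z[:, list(S)] = x[list(S)]  # pin the "present" features to the instance x
        return f(Z).mean()

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

x, bg = X[0], X[:50]
phi = shapley_values(x, bg)
# Efficiency property: attributions sum to the gap between this flow's
# predicted attack probability and the baseline over the background set.
print(phi, phi.sum(), f(x[None])[0] - f(bg).mean())
```

An analyst reading such an attribution vector sees which features pushed this particular flow toward the "attack" class, which is the kind of per-alert evidence the abstract argues is needed to validate IDS outputs.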
ISBN: 979-8-3315-9789-4
Keywords: Internet of Things; Intrusion detection; Explainable AI
Files in this product:

File: SHAP_happens_an_Explainable_IDS_for_Industrial_IoT_Networks.pdf
Description: VoR
Type: editorial version (VoR)
Size: 1.11 MB
Format: Adobe PDF
Access: restricted (archive managers only; copy on request)
File: Explainable_IDS-3_Iris.pdf
Description: AAM
Type: post-print version (AAM)
Size: 977.32 kB
Format: Adobe PDF
Access: open access

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/477505
Citations
  • Scopus: 0