
Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems

Demetrio L.; Biggio B.
2022-01-01

Abstract

Recently, neural networks (NNs) have been proposed for the detection of cyber attacks targeting industrial control systems (ICSs). Such detectors are often retrained, using data collected during system operation, to cope with the evolution of the monitored signals over time. However, by exploiting this mechanism, an attacker can fake the signals provided by corrupted sensors at training time and poison the learning process of the detector so that cyber attacks stay undetected at test time. Previous work explored the ability to generate adversarial samples that fool anomaly detection models in ICSs, but without compromising their training process. With this research, we are the first to demonstrate such poisoning attacks against neural-network-based online detectors of cyber attacks in ICSs. We propose two distinct attack algorithms, namely, interpolation- and back-gradient-based poisoning, and demonstrate their effectiveness. The evaluation is conducted on diverse data sources: synthetic data, real-world ICS testbed data, and a simulation of the Tennessee Eastman process. This first practical evaluation of poisoning attacks using a simulation tool highlights the challenges of poisoning dynamically controlled systems. The generality of the proposed methods under different NN parameters and architectures is studied. Lastly, we propose and analyze some potential mitigation strategies.
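The interpolation-based idea described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's actual models or data: a one-dimensional sensor, a simple mean/standard-deviation threshold detector standing in for the NN-based detector, and a fixed interpolation step. The attacker feeds the periodically retrained detector batches that drift in small interpolation steps from normal behavior toward the attack value, so each batch passes the current detector and drags the next threshold along with it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-dimensional "sensor": normal readings near 0, the
# attacker's target reading near 5 (all values are illustrative).
normal = rng.normal(0.0, 0.5, size=200)
attack_value = 5.0

def fit_threshold(data, k=3.0):
    """Toy anomaly detector: flag readings beyond mean +/- k * std."""
    return data.mean(), k * data.std()

def is_detected(x, mean, margin):
    return abs(x - mean) > margin

# A detector trained on clean data flags the attack reading.
mean, margin = fit_threshold(normal)
assert is_detected(attack_value, mean, margin)

# Interpolation-based poisoning: at each retraining round the attacker
# submits readings interpolated a small step from the current operating
# point toward the attack value, so the batch passes the current
# detector and shifts the threshold learned at the next round.
data = normal.copy()
current = float(normal.mean())
for _ in range(40):
    step = current + 0.25 * (attack_value - current)  # interpolation step
    mean, margin = fit_threshold(data)
    if is_detected(step, mean, margin):
        break  # batch rejected; a real attacker would shrink the step
    poisoned_batch = rng.normal(step, 0.5, size=200)
    data = np.concatenate([data, poisoned_batch])[-800:]  # sliding window
    current = step

mean, margin = fit_threshold(data)
print(is_detected(attack_value, mean, margin))  # False: attack now undetected
```

The sliding retraining window mimics the online-retraining setting the abstract describes; the paper's back-gradient variant instead optimizes poisoning points by differentiating through the training procedure itself, which this sketch does not attempt.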
Anomaly detection; Industrial control systems; Autoencoders; Adversarial machine learning; Poisoning attacks; Adversarial robustness
Files in this product:
File: 1-s2.0-S0167404822002942-main.pdf — Type: published version (VoR) — Size: 3.19 MB — Format: Adobe PDF — Access: restricted (archive managers only)
File: Poisoning_Cyber_Physical_Attack_Detectors___Extended.pdf — Type: pre-print version — Size: 1.71 MB — Format: Adobe PDF — Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/345358
Citations
  • Scopus: 7
  • Web of Science (ISI): 4