On the Role of Explainable Machine Learning for Secure Smart Vehicles

Michele Scalas (first author); Giorgio Giacinto (last author)
2020-01-01

Abstract

The concept of mobility is undergoing a profound transformation due to the Mobility-as-a-Service paradigm. Accordingly, vehicles, commonly referred to as smart, are seeing their architecture revamped to integrate connectivity with the outside environment (V2X) and autonomous driving. A significant part of these innovations is enabled by machine learning. However, deploying such systems raises some concerns. First, the complexity of the algorithms often prevents understanding what these models learn, which is particularly relevant in the safety-critical context of mobility. Second, several studies have demonstrated the vulnerability of machine learning-based algorithms to adversarial attacks. For these reasons, research on the explainability of machine learning is growing. In this paper, we explore the role of interpretable machine learning in the ecosystem of smart vehicles, with the goal of determining whether, and in what terms, explanations help to design secure vehicles. We provide an overview of the potential uses of explainable machine learning, along with recent work in the literature that has started to investigate the topic, including from the perspectives of human-agent systems and cyber-physical systems. Our analysis highlights both the benefits and the critical issues of employing explanations.
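The abstract couples two themes: explanations of what a model has learned and the model's vulnerability to adversarial attacks. As a purely illustrative sketch, not taken from the paper, the following Python/PyTorch snippet shows how the same input gradient of a toy classifier (a hypothetical stand-in for an in-vehicle detector; the architecture, feature count, and budget eps are all assumptions) yields both a saliency-style explanation and an FGSM-style adversarial perturbation (Goodfellow et al., 2015).

```python
# Illustrative sketch only: a toy model showing that one input gradient
# underlies both a saliency explanation and an FGSM-style attack.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an in-vehicle classifier (e.g., an intrusion
# detector over n_features traffic statistics); architecture is assumed.
n_features = 8
model = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, n_features, requires_grad=True)  # one input sample
y = torch.tensor([1])                               # its (assumed) label

loss = loss_fn(model(x), y)
loss.backward()
grad = x.grad.detach()

# 1) Explanation: gradient magnitude as a per-feature saliency score.
saliency = grad.abs().squeeze()
print("most influential feature:", int(saliency.argmax()))

# 2) Attack: FGSM perturbs each feature along the gradient sign with an
#    arbitrary budget eps; the perturbation may flip the prediction.
eps = 0.1
x_adv = x.detach() + eps * grad.sign()
print("prediction before/after:",
      model(x.detach()).argmax(1).item(), model(x_adv).argmax(1).item())
```

This dual use of the gradient is one reason the paper's question is delicate: the same information that makes a model more transparent can also guide an attacker.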
Year: 2020
ISBN: 978-8-8872-3749-8
Keywords: Machine learning; Automobiles; Autonomous vehicles; Automotive engineering; Hardware; Standards; Security; Explainability; Cybersecurity; Mobility; Smart Vehicles; Automotive; Connected Cars; Autonomous Driving
Files in this record:
AEIT_Automotive_Cybersecurity-2.pdf (pre-print version, Adobe PDF, 937.65 kB). Access restricted to archive managers; a copy can be requested.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/304835
Citations
  • Scopus: 5
  • Web of Science (ISI): 4