On the robustness of adversarial training against uncertainty attacks

Ledda, Emanuele; Angioni, Daniele; Piras, Giorgio; Cinà, Antonio Emanuele; Fumera, Giorgio; Biggio, Battista; Roli, Fabio
2026-01-01

Abstract

In learning problems, the noise inherent to the task at hand makes it impossible to infer without some degree of uncertainty. Quantifying this uncertainty, beyond its widespread general use, is especially relevant for security-sensitive applications. In these scenarios, it becomes fundamental to guarantee good (i.e., trustworthy) uncertainty measures, which downstream modules can securely employ to drive the final decision-making process. However, an attacker may be interested in forcing the system to produce either (i) highly uncertain outputs, jeopardizing the system’s availability, or (ii) low uncertainty estimates, making the system accept uncertain samples that would instead require careful inspection (e.g., human intervention). It is therefore essential to understand how to obtain uncertainty estimates that are robust to these kinds of attacks. In this work, we show both empirically and theoretically that defending against adversarial examples, i.e., carefully perturbed samples that cause misclassification, also yields more secure and trustworthy uncertainty estimates under common attack scenarios, without the need for an ad-hoc defense strategy. To support our claims, we evaluate multiple adversarially robust classification models from the publicly available RobustBench benchmark on the CIFAR-10 and ImageNet datasets, as well as a robust semantic segmentation model on Pascal-VOC. The code to reproduce the experiments is available at https://github.com/pralab/UncertaintyAdversarialRobustness.
Uncertainty quantification; Adversarial machine learning; Neural networks
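
For illustration only, the sketch below shows the kind of uncertainty attack described in the abstract: a PGD-style perturbation, constrained to an L-infinity ball, that pushes a classifier's predictive entropy up (scenario (i), harming availability) or down (scenario (ii), sneaking uncertain inputs past an uncertainty-based filter). This is a hypothetical minimal example, not the authors' implementation; the function name, perturbation budget, step size, and iteration count are illustrative assumptions, and the actual code is in the repository linked above.

```python
# Hypothetical minimal sketch of a PGD-style "uncertainty attack" (not the paper's code).
# maximize=True raises predictive entropy (availability attack, scenario (i));
# maximize=False lowers it (evading an uncertainty-based rejection filter, scenario (ii)).
import torch
import torch.nn.functional as F


def entropy_attack(model, x, eps=8 / 255, alpha=2 / 255, steps=10, maximize=True):
    """Perturb x within an L-inf ball of radius eps to push softmax entropy up or down."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        probs = F.softmax(model(x_adv), dim=1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
        grad = torch.autograd.grad(entropy, x_adv)[0]
        step = alpha * grad.sign() if maximize else -alpha * grad.sign()
        with torch.no_grad():
            x_adv = x_adv + step                      # gradient ascent/descent on entropy
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels in the valid range
        x_adv = x_adv.detach()
    return x_adv
```

Under the paper's claim, an adversarially trained model (e.g., one taken from RobustBench) should exhibit much smaller entropy shifts under both attack variants than a standardly trained one.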
Files in this record:
  • 1-s2.0-S0031320325011823-main.pdf: publisher's version (VoR), Adobe PDF, 7.16 MB, open access
  • 2410.21952v2.pdf: pre-print version, Adobe PDF, 973.15 kB, restricted access (archive administrators only; copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/459587
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0