Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization
Giuseppe Floris; Raffaele Mura; Luca Scionis; Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio
2023-01-01
Abstract
Evaluating the adversarial robustness of machine-learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function, the optimizer, and the step-size scheduler, along with the corresponding hyperparameters. Our extensive evaluation involving several robust models demonstrates the improved efficacy of fast minimum-norm attacks when hyped up with hyperparameter optimization. We release our open-source code at https://github.com/pralab/HO-FMN.
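The abstract describes tuning the attack's loss function, optimizer, and step-size scheduler via hyperparameter optimization. The sketch below illustrates how such a search could be set up with Optuna; the candidate losses (CE, DLR), optimizers, schedulers, step-size range, and the `evaluate_attack_config` helper are illustrative assumptions, not the released HO-FMN implementation (see the linked repository for that).

```python
# Minimal sketch, assuming Optuna as the hyperparameter-optimization backend.
import math

import optuna


def evaluate_attack_config(loss_name, optimizer_name, scheduler_name, step_size):
    """Stand-in for running an FMN attack with the chosen configuration.

    In a real evaluation this would attack a robust model on a batch of
    samples and return the median norm of the minimal adversarial
    perturbations found (lower is better). A synthetic score is returned
    here so the sketch runs end to end.
    """
    penalty = {"CE": 0.10, "DLR": 0.00}[loss_name]
    penalty += {"SGD": 0.05, "Adam": 0.00}[optimizer_name]
    penalty += {"cosine": 0.00, "multistep": 0.02}[scheduler_name]
    # Pretend the attack works best around a step size of 0.1.
    return penalty + (math.log10(step_size) + 1.0) ** 2


def objective(trial: optuna.Trial) -> float:
    # Search space: loss, optimizer, and scheduler are categorical choices,
    # while the initial step size is tuned on a log scale.
    loss_name = trial.suggest_categorical("loss", ["CE", "DLR"])
    optimizer_name = trial.suggest_categorical("optimizer", ["SGD", "Adam"])
    scheduler_name = trial.suggest_categorical("scheduler", ["cosine", "multistep"])
    step_size = trial.suggest_float("step_size", 1e-3, 1.0, log=True)
    return evaluate_attack_config(loss_name, optimizer_name, scheduler_name, step_size)


if __name__ == "__main__":
    # Minimize the (median) perturbation norm over attack configurations.
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=50)
    print("Best configuration:", study.best_params)
```

In an actual pipeline, the objective would run the configured minimum-norm attack against the target robust model and score it by the size of the adversarial perturbations it finds; the released code at https://github.com/pralab/HO-FMN implements the authors' version of this search.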
Files in this record:

File | Access | Type | Size | Format
---|---|---|---|---
ES2023-164 (1).pdf | Archive administrators only (request a copy) | Publisher's version (VoR) | 1.69 MB | Adobe PDF
2310.08177.pdf | Open access | Pre-print version | 443.53 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.