
Position-Aware Stamp-Like Adversarial Attack for Document Classification

Pintor, Maura;
2025-01-01

Abstract

Adversarial attacks make small but strategically crafted modifications to an image in order to mislead an automatic classifier. Many existing attack methods introduce unnatural alterations [15, 29]; if such a patch were included in a document, it could make the document look suspicious. In contrast, this paper investigates a more natural and inconspicuous approach: stamp-like adversarial patches that resemble real-world document elements while effectively disrupting classification. To systematically evaluate the effectiveness of these adversarial stamps, we conduct extensive experiments on the RVL-CDIP dataset, a widely used benchmark for document classification. We analyze the impact of patch attributes, including color, size, shape, and, most importantly, position, on the attack success rate. Our study shows that placement plays a crucial role in maximizing the attack's effectiveness, as different locations on the document lead to varying degrees of classifier degradation. To optimize both the adversarial patch and its position, we introduce an iterative training pipeline that dynamically identifies the most disruptive locations in a document. Our results show that stamp-like adversarial patches can effectively attack document classifiers, revealing their vulnerabilities; well-placed stamps degrade classification accuracy even further, highlighting the impact of positional optimization. These findings emphasize the importance of position-aware adversarial attacks and provide insights for optimizing their design. We will make our code publicly available upon acceptance.
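The page does not detail the iterative pipeline, but the core idea it describes — placing a stamp-like patch and searching for the position that most degrades the classifier — can be sketched roughly as follows. This is a minimal illustration, not the paper's method: `toy_loss` is a hypothetical stand-in for the real classifier's loss, and the candidate grid and greedy search are assumptions for the sake of the example.

```python
import numpy as np

def apply_patch(image, patch, y, x):
    """Paste a stamp-like patch onto a copy of the image at (y, x)."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out

def toy_loss(image):
    # Hypothetical stand-in for the classifier's loss on the true label;
    # here it simply rewards dark pixels in the top-left quadrant.
    h, w = image.shape[:2]
    return -image[:h // 2, :w // 2].mean()

def best_position(image, patch, candidates, loss_fn):
    """Greedy position search: return the placement maximizing the loss."""
    scored = [(loss_fn(apply_patch(image, patch, y, x)), (y, x))
              for y, x in candidates]
    return max(scored)[1]

doc = np.ones((64, 64))      # white "document"
stamp = np.zeros((8, 8))     # dark square "stamp"
cands = [(0, 0), (0, 56), (56, 0), (56, 56)]
pos = best_position(doc, stamp, cands, toy_loss)
```

In the paper's setting, the loss function would come from the attacked document classifier, and the patch pixels themselves would also be optimized iteratively rather than fixed as here.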
ISBN: 9783032046260, 9783032046277

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/454365