
Poison once, fool many: practical poisoning attacks against text-to-image retrieval systems

Mura, Raffaele; Biggio, Battista; Roli, Fabio
2026-01-01

Abstract

Text-to-Image retrieval (IR) systems are widely used to match images to specific textual queries, often leveraging publicly available Vision-Language Pretrained models (VLPs) for their generalization capabilities. However, due to the diverse and open nature of the image data they rely on, these systems remain vulnerable to data poisoning attacks, where malicious images are injected into the database to manipulate retrieval results. Prior work has demonstrated the effectiveness of attacks when the exact user query is known at retrieval time. However, this assumption is often impractical, as users tend to express similar intents using varied, semantically equivalent queries (e.g., through synonyms), which reduces the effectiveness of existing attacks. In this paper, we address this gap by proposing an attack that remains effective even when users issue semantically varied queries. We introduce Collisio, a novel poisoning method that crafts a single poisoned image to be retrieved under any semantically equivalent form of a target query. To achieve this, Collisio leverages an Expectation over Queries (EoQ) strategy, generating a diverse set of synthetic and selectively transformed query variants, and then optimizes the poisoned image to align with them. We extensively evaluate Collisio on the Flickr30k and MSCOCO datasets across multiple VLPs, demonstrating the severity of Collisio under realistic query variations. Given the implications of this vulnerability, we examine countermeasures based on adversarially trained models and a data preprocessing defense, highlighting both their mitigation potential and the trade-offs involved.
2026
Machine learning security; Data poisoning; Text-to-image retrieval; Vision-language models; Expectation over queries; Robustness
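The Expectation over Queries (EoQ) strategy described in the abstract optimizes a single poisoned image against a set of semantically equivalent query variants rather than one fixed query. A minimal sketch of that objective is below, under loud assumptions: the real attack perturbs image pixels through a frozen VLP image encoder, whereas here we optimize an embedding vector directly by gradient ascent on the mean cosine similarity to the variant embeddings. All function names and dimensions are illustrative, not the paper's implementation.

```python
import numpy as np

def eoq_loss(img_emb, query_embs):
    """Negative mean cosine similarity between a poisoned-image embedding
    and a set of query-variant embeddings (sketch of the EoQ objective)."""
    img = img_emb / np.linalg.norm(img_emb)
    qs = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    return -float(np.mean(qs @ img))

def optimize_poison(query_embs, dim=8, steps=200, lr=0.5, seed=0):
    """Gradient ascent on the expected similarity over query variants.
    In the actual attack the optimization would run over image pixels
    through the VLP image encoder; here we update the embedding itself."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    qs = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    q_bar = qs.mean(axis=0)
    for _ in range(steps):
        xn = x / np.linalg.norm(x)
        # gradient of mean cosine similarity w.r.t. x (orthogonal to xn)
        grad = (q_bar - (q_bar @ xn) * xn) / np.linalg.norm(x)
        x = x + lr * grad
    return x

# Toy query-variant embeddings clustered around one "intent" direction,
# standing in for encoded synonym rewrites of a target query.
rng = np.random.default_rng(1)
center = rng.normal(size=8)
variants = center + 0.1 * rng.normal(size=(5, 8))
poison = optimize_poison(variants)
print(f"mean cosine similarity: {-eoq_loss(poison, variants):.3f}")
```

Because the objective averages over the variant set, the optimized embedding aligns with the shared direction of all rewrites instead of overfitting to one phrasing, which is what lets the single poisoned image surface under queries the attacker never saw.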
Files in this product:
File: 1-s2.0-S0950705125021288-main_compressed.pdf
Access: open access
Description: online article
Type: publisher's version (VoR)
Size: 1.45 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/465706