Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only a few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security to evasion may even be worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.
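The wrapper-based idea described in the abstract can be sketched as follows: each candidate feature subset is scored not only by clean accuracy, but also by accuracy after a simulated evasion attack. This is a minimal illustrative sketch, not the paper's actual formulation: the nearest-centroid classifier, the feature-zeroing evasion model, and the `lam` trade-off in `adversary_aware_score` are all simplifying assumptions introduced here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy binary data: 6 features, classes separated on features 0-2.
X = rng.normal(size=(200, 6))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

def fit_linear(Xs, y):
    """Nearest-centroid linear discriminant (illustration only)."""
    mu1, mu0 = Xs[y == 1].mean(axis=0), Xs[y == 0].mean(axis=0)
    w = mu1 - mu0
    b = -0.5 * (mu1 + mu0) @ w
    return w, b

def accuracy(Xs, y, w, b):
    return float(np.mean((Xs @ w + b > 0).astype(int) == y))

def evade(Xs, w, budget=1):
    """Hypothetical evasion model: malicious samples zero out their
    `budget` highest-weight features to try to cross the boundary."""
    Xa = Xs.copy()
    top = np.argsort(w)[::-1][:budget]
    Xa[:, top] = 0.0
    return Xa

def adversary_aware_score(subset, lam=0.5):
    """Wrapper criterion: clean accuracy minus lam * accuracy drop
    under attack. A simplified trade-off, not the paper's objective."""
    Xs = X[:, list(subset)]
    w, b = fit_linear(Xs, y)
    clean = accuracy(Xs, y, w, b)
    mal = y == 1
    Xadv = Xs.copy()
    Xadv[mal] = evade(Xs[mal], w)  # only malicious points are perturbed
    attacked = accuracy(Xadv, y, w, b)
    return clean - lam * (clean - attacked)

# Exhaustive wrapper search over all 3-feature subsets (feasible here).
best = max(combinations(range(6), 3), key=adversary_aware_score)
print(best)
```

In practice the exhaustive search would be replaced by a greedy forward or backward selection, and the inner attack simulation by the adversary model assumed for the application at hand.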
Adversarial Feature Selection Against Evasion Attacks / Zhang F; Chan PPK; Biggio B; Yeung DS; Roli F. - 46:3(2016), pp. 766-777.
|Title:||Adversarial Feature Selection Against Evasion Attacks|
|Publication date:||2016|
|Citation:||Adversarial Feature Selection Against Evasion Attacks / Zhang F; Chan PPK; Biggio B; Yeung DS; Roli F. - 46:3(2016), pp. 766-777.|
|Type:||1.1 Journal article|
Files in this product:
|Adversarial Feature Selection Against Evasion Attacks.pdf||publisher's version||Administrator Request a copy|