Authors
Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy
Publication date
2023
Conference
32nd USENIX Security Symposium (USENIX Security 23)
Pages
3817-3833
Description
Machine learning models are known to be vulnerable to adversarial evasion attacks, as illustrated by image classification models. Thoroughly understanding such attacks is critical to ensuring the safety and robustness of critical AI tasks. However, most evasion attacks are difficult to deploy against the majority of AI systems because they focus on the image domain, which imposes few constraints. An image is composed of homogeneous, numerical, continuous, and independent features, unlike many other input types to AI systems used in practice. Furthermore, some input types carry additional semantic and functional constraints that must be observed to generate realistic adversarial inputs. In this work, we propose a new framework that enables the generation of adversarial inputs irrespective of the input type and task domain. Given an input and a set of pre-defined input transformations, our framework discovers a sequence of transformations that results in a semantically correct and functional adversarial input. We demonstrate the generality of our approach on several diverse machine learning tasks with various input representations. We also show the importance of generating such adversarial examples, as they enable the deployment of mitigation techniques.
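The abstract describes searching for a sequence of pre-defined, semantics-preserving input transformations that flips a model's decision. A minimal illustrative sketch of that idea is below; it uses a simple greedy search and is not the paper's actual toolkit or API. The `score` callable (the model's confidence in the correct class), the transformation set, and all names are hypothetical assumptions.

```python
from typing import Any, Callable, List, Optional, Tuple

def greedy_evasion(
    x: Any,
    transforms: List[Callable[[Any], Any]],   # semantics-preserving edits (hypothetical)
    score: Callable[[Any], float],            # model confidence in the correct class
    max_steps: int = 10,
) -> Tuple[Optional[Any], List[Callable[[Any], Any]]]:
    """Greedy sketch: repeatedly apply whichever pre-defined transformation
    most lowers the model's confidence in the correct class, stopping once
    the decision flips or no transformation helps."""
    current, sequence = x, []
    for _ in range(max_steps):
        # Score every candidate one-step transformation of the current input.
        best_t = min(transforms, key=lambda t: score(t(current)))
        candidate = best_t(current)
        if score(candidate) >= score(current):
            break  # no transformation improves the attack; give up
        current, sequence = candidate, sequence + [best_t]
        if score(current) < 0.5:  # decision flipped: adversarial input found
            return current, sequence
    return None, sequence  # search failed within the step budget
```

Because each transformation in the set is assumed to preserve the input's semantics and functionality, any sequence the search returns yields a realistic adversarial input by construction; a real implementation would also need validity checks for input types with richer constraints.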