CapsAttacks: Robust and imperceptible adversarial attacks on capsule networks

A. Marchisio, G. Nanfa, F. Khalid, M. A. Hanif… - arXiv preprint arXiv:1901.09878, 2019 - arxiv.org
Capsule Networks preserve the hierarchical spatial relationships between objects, and thereby bear the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) on tasks like image classification. A large body of work has explored adversarial examples for CNNs, but their effectiveness on Capsule Networks has not yet been well studied. In our work, we analyze the vulnerabilities of Capsule Networks to adversarial attacks. These perturbations, added to the test inputs, are small and imperceptible to humans, but can fool the network into mispredicting. We propose a greedy algorithm to automatically generate targeted imperceptible adversarial examples in a black-box attack scenario. We show that such attacks, when applied to the German Traffic Sign Recognition Benchmark (GTSRB), mislead Capsule Networks. Moreover, we apply the same kind of adversarial attacks to a 5-layer CNN and a 9-layer CNN, and analyze the outcomes relative to the Capsule Networks to study differences in their behavior.
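To make the idea of a greedy black-box attack concrete, below is a minimal sketch (not the authors' actual algorithm, whose details are not given in this abstract). It assumes only query access to a `predict` function returning class probabilities, and greedily applies the single-pixel perturbation that most increases the target-class score at each step; the step size `eps` and probe budget are illustrative choices.

```python
import numpy as np

def greedy_blackbox_attack(predict, x, target, eps=0.1, max_iters=50, rng=None):
    """Greedily perturb single entries of `x` (values in [0, 1]) to raise the
    probability of class `target`, using only black-box queries to `predict`.
    `predict(x)` must return a vector of class probabilities."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_adv = x.copy()
    for _ in range(max_iters):
        if np.argmax(predict(x_adv)) == target:
            break  # attack succeeded: model now predicts the target class
        base = predict(x_adv)[target]
        best_gain, best_idx, best_sign = 0.0, None, 0.0
        # Probe a random subset of coordinates in both directions
        # and keep the single change that most improves the target score.
        probes = rng.choice(x_adv.size, size=min(32, x_adv.size), replace=False)
        for idx in probes:
            for sign in (+eps, -eps):
                trial = x_adv.copy()
                trial.flat[idx] = np.clip(trial.flat[idx] + sign, 0.0, 1.0)
                gain = predict(trial)[target] - base
                if gain > best_gain:
                    best_gain, best_idx, best_sign = gain, idx, sign
        if best_idx is None:
            break  # no single-coordinate change helps; give up
        x_adv.flat[best_idx] = np.clip(x_adv.flat[best_idx] + best_sign, 0.0, 1.0)
    return x_adv
```

Because each iteration commits only the smallest useful change, the accumulated perturbation stays bounded and tends to remain imperceptible; the same query-only loop can be pointed at a Capsule Network or a CNN, which is what enables the behavioral comparison described above.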