Adversarial attacks hidden in plain sight

JP Göpfert, A Artelt, H Wersing, B Hammer - Advances in Intelligent Data Analysis XVIII: 18th International Symposium on …, 2020 - Springer
Abstract
Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which lead to visible artifacts. By considering human visual perception, we compose a technique that allows us to hide such adversarial attacks in regions of high complexity, such that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach and find significant evidence for its concealment with regard to human visual perception.
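The abstract does not spell out how the perturbations are confined to high-complexity regions. A minimal, hypothetical sketch of the general idea (not the authors' method) is to scale a standard gradient-sign perturbation by a local-complexity map, so that larger changes only occur in textured areas where they are less perceptible. The function names, the local standard deviation as a complexity measure, the 7-pixel window, and the epsilon value below are all assumptions made for illustration.

```python
# Illustrative sketch only: complexity-masked FGSM-style perturbation.
import torch
import torch.nn.functional as F

def local_complexity(image, kernel_size=7):
    """Estimate per-pixel complexity as the local standard deviation of intensity.

    image: tensor of shape (1, C, H, W) with values in [0, 1].
    Returns a map of shape (1, 1, H, W), normalized to [0, 1].
    """
    gray = image.mean(dim=1, keepdim=True)
    pad = kernel_size // 2
    padded = F.pad(gray, (pad, pad, pad, pad), mode="reflect")
    mean = F.avg_pool2d(padded, kernel_size, stride=1)
    sq_mean = F.avg_pool2d(padded ** 2, kernel_size, stride=1)
    std = (sq_mean - mean ** 2).clamp(min=0).sqrt()
    return std / (std.max() + 1e-8)

def masked_fgsm(model, image, label, epsilon=0.05):
    """One gradient-sign step whose amplitude is scaled by the complexity map,
    concentrating the perturbation in visually busy regions."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    mask = local_complexity(image.detach())
    adv = image.detach() + epsilon * mask * image.grad.sign()
    return adv.clamp(0.0, 1.0)
```

Under these assumptions, a smooth sky region (low local standard deviation) receives almost no perturbation, while foliage or texture absorbs the bulk of it; the paper itself should be consulted for the actual perceptual model used.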