Authors
Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
Publication date
2020/7/19
Conference paper
2020 International Joint Conference on Neural Networks (IJCNN)
Pages
1-8
Publisher
IEEE
Description
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we aim to investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs with respect to adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., one that requires no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN …
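The abstract describes the attack only at a high level (black-box access, a greedy heuristic, an imperceptibility constraint); the exact pixel-selection heuristic and metrics are not given in this summary. The following is a minimal Python sketch of a generic greedy black-box attack under those stated properties, assuming a query-only `predict_proba` interface; the function name, the `eps` budget, the query limit, and the random pixel ordering are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def greedy_blackbox_attack(predict_proba, image, true_label,
                           eps=0.1, max_queries=1000, seed=0):
    """Sketch of a greedy black-box attack: perturb one pixel at a
    time, keeping only changes that lower the model's confidence in
    the true class. Uses output probabilities only (no gradients),
    and bounds each pixel change by `eps` so the perturbation stays
    visually small. Illustrative, not the paper's exact method."""
    adv = image.astype(np.float64)      # work on a float copy in [0, 1]
    flat = adv.ravel()                  # view: edits write through to adv
    order = np.random.default_rng(seed).permutation(flat.size)
    queries = 0
    probs = predict_proba(adv)
    for idx in order:
        if queries >= max_queries or np.argmax(probs) != true_label:
            break                       # budget exhausted or attack succeeded
        original = flat[idx]
        for delta in (eps, -eps):       # try pushing the pixel up, then down
            flat[idx] = np.clip(original + delta, 0.0, 1.0)
            new_probs = predict_proba(adv)
            queries += 1
            if new_probs[true_label] < probs[true_label]:
                probs = new_probs       # keep the beneficial change
                break
            flat[idx] = original        # revert: this change did not help
    return adv

# Usage with a stand-in model: any callable mapping an image to a
# probability vector works, e.g. a wrapped SDBN or DNN classifier.
def toy_model(x):
    logits = np.array([x.mean(), 1.0 - x.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

adv_image = greedy_blackbox_attack(toy_model, np.full((8, 8), 0.6),
                                   true_label=0)
```

The random pixel ordering here is only a stand-in: the paper's greedy heuristic presumably ranks candidate perturbations by their effect on the output, which is what makes its adversarial examples both imperceptible and robust.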
Total citations
Cited by (per year): 2019: 1 · 2020: 8 · 2021: 13 · 2022: 11 · 2023: 8 · 2024: 5
Scholar articles
A Marchisio, G Nanfa, F Khalid, MA Hanif, M Martina… - 2020 International Joint Conference on Neural Networks (IJCNN), 2020
A Marchisio, G Nanfa, F Khalid, MA Hanif, M Martina… - arXiv preprint arXiv:1902.01147, 2019