Authors
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
Publication date
2019/7/1
Conference paper
2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS)
Pages
188-193
Publisher
IEEE
Description
Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be mitigated by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and so can be detected by correlation and structural-similarity analysis, or are noticeable (e.g., to humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data …
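The abstract's core mechanism, crafting an imperceptible perturbation by back-propagating through a pre-trained DNN with no access to the training data, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' exact optimization: it substitutes an iterative sign-gradient update with an L-infinity bound for the paper's correlation/structural-similarity imperceptibility criteria, and the model choice (resnet18), step size, and bound are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def imperceptible_attack(model, image, label, epsilon=2 / 255, alpha=0.5 / 255, steps=40):
    """Perturb `image` via back-propagation through a pre-trained `model`.

    The perturbation is kept inside an L-infinity ball of radius `epsilon`
    (a common stand-in for imperceptibility; the paper instead checks
    correlation and structural similarity). No training data is used:
    gradients come solely from the frozen, pre-trained network.
    """
    model.eval()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)        # push away from the current class
        grad, = torch.autograd.grad(loss, adv)           # back-propagate to the *input*
        with torch.no_grad():
            adv = adv + alpha * grad.sign()              # ascend the loss
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # keep the noise small
            adv = adv.clamp(0.0, 1.0)                    # stay a valid image
    return adv.detach()

if __name__ == "__main__":
    # Downloads ImageNet weights; any pre-trained classifier would do.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    x = torch.rand(1, 3, 224, 224)      # stand-in for a real (normalized) input image
    y = model(x).argmax(dim=1)          # use the model's own prediction as the label,
                                        # so no ground-truth training labels are needed
    x_adv = imperceptible_attack(model, x, y)
    print("max |perturbation|:", (x_adv - x).abs().max().item())
```

Using the model's own prediction as the label is one way the training-data-unaware property can be realized in such a sketch: everything the attack needs is obtained from the pre-trained network at inference time.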
Total citations
Citations per year: 2019: 4, 2020: 8, 2021: 6, 2022: 5, 2023: 5, 2024: 2
Scholar articles
F Khalid, MA Hanif, S Rehman, R Ahmed, M Shafique - 2019 IEEE 25th International symposium on on-line …, 2019