QuSecNets: Quantization-based defense mechanism for securing deep neural networks against adversarial attacks

F Khalid, H Ali, H Tariq, MA Hanif… - 2019 IEEE 25th International Symposium on On-Line Testing and …, 2019 - ieeexplore.ieee.org
Adversarial examples have emerged as a significant threat to machine learning algorithms, especially convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities to a "fixed" number of quantization levels, while in TQ the quantization levels are "iteratively learned during the training phase", thereby providing a stronger defense mechanism. We apply the proposed techniques to undefended CNNs and evaluate them against different state-of-the-art adversarial attacks from the open-source Cleverhans library. The experimental results demonstrate increases of 50%-96% and 10%-50% in classification accuracy on perturbed images generated from the MNIST and CIFAR-10 datasets, respectively, for a commonly used CNN (Conv2D(64, 8×8) - Conv2D(128, 6×6) - Conv2D(128, 5×5) - Dense(10) - Softmax()) available in the Cleverhans library.
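
A minimal sketch of the Constant Quantization (CQ) idea described in the abstract, not the authors' implementation: input pixel intensities are snapped to a fixed number of evenly spaced levels before the image reaches the classifier, so small adversarial perturbations below the level spacing are removed. The function name, the NumPy-based preprocessing, and the choice of 4 levels are assumptions for illustration only.

    import numpy as np

    def constant_quantize(images, num_levels=4):
        # Constant Quantization (CQ) sketch: map each pixel in [0, 1] to the
        # nearest of `num_levels` fixed, evenly spaced quantization levels.
        levels = np.linspace(0.0, 1.0, num_levels)            # fixed levels
        idx = np.abs(images[..., None] - levels).argmin(-1)   # nearest level per pixel
        return levels[idx]

    # Usage: quantize a (possibly adversarially perturbed) MNIST-sized image
    # before feeding it to the CNN; perturbations smaller than the level
    # spacing collapse to the same quantized value as the clean pixel.
    perturbed = np.clip(np.random.rand(28, 28) + 0.05 * np.random.randn(28, 28), 0.0, 1.0)
    cleaned = constant_quantize(perturbed, num_levels=4)

In the Trainable Quantization (TQ) variant described in the abstract, the level positions would not be fixed as above but would instead be learned iteratively during training.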