QuSecNets: Quantization-based defense mechanism for securing deep neural network against adversarial attacks

F Khalid, H Ali, H Tariq, MA Hanif… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Adversarial examples have emerged as a significant threat to machine learning algorithms,
especially convolutional neural networks (CNNs). In this paper, we propose two …

Preventing data poisoning attacks by using generative models

M Aladag, FO Catak, E Gul - 2019 1st International Informatics …, 2019 - ieeexplore.ieee.org
Machine learning methods have become increasingly popular, and their areas of application
have grown with this popularity. The machine learning …

TrISec: training data-unaware imperceptible security attacks on deep neural networks

F Khalid, MA Hanif, S Rehman… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Most data manipulation attacks on deep neural networks (DNNs) during the training
stage introduce perceptible noise that can be handled by preprocessing during inference …

VAWS: Vulnerability analysis of neural networks using weight sensitivity

M Hailesellasie, J Nelson, F Khalid… - 2019 IEEE 62nd …, 2019 - ieeexplore.ieee.org
Advances in deep learning have taken the technology world by storm in the last
decade. Although there has been enormous progress in algorithm performance, the …

A saddle-point dynamical system approach for robust deep learning

Y Esfandiari, K Ebrahimi, A Balu, N Elia… - arXiv preprint arXiv …, 2019 - core.ac.uk
We propose a novel discrete-time dynamical system-based framework for achieving
adversarial robustness in machine learning models. Our algorithm originates from robust …