QuSecNets: Quantization-based defense mechanism for securing deep neural networks against adversarial attacks
Adversarial examples have emerged as a significant threat to machine learning algorithms,
especially to convolutional neural networks (CNNs). In this paper, we propose two …
Preventing data poisoning attacks by using generative models
Machine learning methods have become increasingly popular, and their areas of application
have grown with this popularity. The machine learning …
TrISec: training data-unaware imperceptible security attacks on deep neural networks
Most data manipulation attacks on deep neural networks (DNNs) during the training
stage introduce perceptible noise that can be countered by preprocessing during inference …
VAWS: Vulnerability analysis of neural networks using weight sensitivity
M Hailesellasie, J Nelson, F Khalid… - 2019 IEEE 62nd …, 2019 - ieeexplore.ieee.org
The advancement in deep learning has taken the technology world by storm in the last
decade. Although there is enormous progress in terms of algorithm performance, the …
A saddle-point dynamical system approach for robust deep learning
We propose a novel discrete-time dynamical system-based framework for achieving
adversarial robustness in machine learning models. Our algorithm originates from robust …