RED-Attack: Resource-efficient decision-based attack for machine learning

F Khalid, H Ali, MA Hanif, S Rehman, R Ahmed… - arXiv preprint arXiv …, 2019 - arxiv.org
Due to data dependency and model leakage properties, Deep Neural Networks (DNNs)
exhibit several security vulnerabilities. Many security attacks exploit them, but most of …

QEBA: Query-efficient boundary-based blackbox attack

H Li, X Xu, X Zhang, S Yang… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Machine learning (ML), especially deep neural networks (DNNs), has been widely
used in various applications, including several safety-critical ones (e.g., autonomous driving) …

Query-efficient meta attack to deep neural networks

J Du, H Zhang, JT Zhou, Y Yang, J Feng - arXiv preprint arXiv:1906.02398, 2019 - arxiv.org
Black-box attack methods aim to infer suitable attack patterns for targeted DNN models
using only the models' output feedback and the corresponding input queries. However …

FaDec: A fast decision-based attack for adversarial machine learning

F Khalid, H Ali, MA Hanif, S Rehman… - … Joint Conference on …, 2020 - ieeexplore.ieee.org
Due to the excessive use of cloud-based machine learning (ML) services, smart cyber-
physical systems (CPS) are becoming increasingly vulnerable to black-box attacks on their …

ABCAttack: a gradient-free optimization black-box attack for fooling deep image classifiers

H Cao, C Si, Q Sun, Y Liu, S Li, P Gope - Entropy, 2022 - mdpi.com
The vulnerability of deep neural network (DNN)-based systems makes them susceptible to
adversarial perturbations, which may cause classification tasks to fail. In this work, we propose …

Uncertainty as a Swiss army knife: new adversarial attack and defense ideas based on epistemic uncertainty

OF Tuna, FO Catak, MT Eskil - Complex & Intelligent Systems, 2023 - Springer
Although state-of-the-art deep neural network models are known to be robust to random
perturbations, these architectures have been shown to be quite vulnerable to …

On the effectiveness of small input noise for defending against query-based black-box attacks

J Byun, H Go, C Kim - Proceedings of the IEEE/CVF winter …, 2022 - openaccess.thecvf.com
While deep neural networks show unprecedented performance in various tasks, their
vulnerability to adversarial examples hinders their deployment in safety-critical systems …

TrISec: training data-unaware imperceptible security attacks on deep neural networks

F Khalid, MA Hanif, S Rehman… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Most data manipulation attacks on deep neural networks (DNNs) during the training
stage introduce a perceptible noise that can be countered by preprocessing during inference …

Learning adversary-resistant deep neural networks

Q Wang, W Guo, K Zhang, AG Ororbia II, X Xing… - arXiv preprint arXiv …, 2016 - arxiv.org
Deep neural networks (DNNs) have proven to be quite effective in a vast array of machine
learning tasks, with recent examples in cyber security and autonomous vehicles. Despite the …

Random directional attack for fooling deep neural networks

W Luo, C Wu, N Zhou, L Ni - arXiv preprint arXiv:1908.02658, 2019 - arxiv.org
Deep neural networks (DNNs) have been widely used in many fields such as image
processing and speech recognition; however, they are vulnerable to adversarial examples, and …