Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments

X Bai, X Wang, X Liu, Q Liu, J Song, N Sebe, B Kim - Pattern Recognition, 2021 - Elsevier
Deep learning has recently achieved great success in many visual recognition tasks.
However, deep neural networks (DNNs) are often perceived as black boxes, making …

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Axiom-based Grad-CAM: Towards accurate visualization and explanation of CNNs

R Fu, Q Hu, X Dong, Y Guo, Y Gao, B Li - arXiv preprint arXiv:2008.02312, 2020 - arxiv.org
To have a better understanding and usage of Convolutional Neural Networks (CNNs), the
visualization and interpretation of CNNs have attracted increasing attention in recent years. In …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer

S Hu, X Liu, Y Zhang, M Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
While deep face recognition (FR) systems have shown amazing performance in
identification and verification, they also raise privacy concerns for their excessive …

Clean-label backdoor attacks on video recognition models

S Zhao, X Ma, X Zheng, J Bailey… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are vulnerable to backdoor attacks which can hide backdoor
triggers in DNNs by poisoning training data. A backdoored model behaves normally on …

Benchmarking adversarial robustness on image classification

Y Dong, QA Fu, X Yang, T Pang… - proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which has become one of the
most important research problems in the development of deep learning. While a lot of efforts …

Occlusion robust face recognition based on mask learning with pairwise differential siamese network

L Song, D Gong, Z Li, C Liu… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Deep Convolutional Neural Networks (CNNs) have been pushing the frontier of
face recognition over past years. However, existing CNN models are far less accurate when …

Improving black-box adversarial attacks with a transfer-based prior

S Cheng, Y Dong, T Pang, H Su… - Advances in neural …, 2019 - proceedings.neurips.cc
We consider the black-box adversarial setting, where the adversary has to generate
adversarial perturbations without access to the target models to compute gradients. Previous …