Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Differentiable rendering: A survey

H Kato, D Beker, M Morariu, T Ando… - arXiv preprint arXiv …, 2020 - arxiv.org
Deep neural networks (DNNs) have shown remarkable performance improvements on
vision-related tasks such as object detection or image segmentation. Despite their success …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Differentiable Monte Carlo ray tracing through edge sampling

TM Li, M Aittala, F Durand, J Lehtinen - ACM Transactions on Graphics …, 2018 - dl.acm.org
Gradient-based methods are becoming increasingly important for computer graphics,
machine learning, and computer vision. The ability to compute gradients is crucial to …

Diffusion-based adversarial sample generation for improved stealthiness and controllability

H Xue, A Araujo, B Hu, Y Chen - Advances in Neural …, 2023 - proceedings.neurips.cc
Neural networks are known to be susceptible to adversarial samples: small variations of
natural examples crafted to deliberately mislead the models. While they can be easily …

Making an invisibility cloak: Real world adversarial attacks on object detectors

Z Wu, SN Lim, LS Davis, T Goldstein - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
We present a systematic study of the transferability of adversarial attacks on state-of-the-art
object detection frameworks. Using standard detection datasets, we train patterns that …

Physically realizable adversarial examples for LiDAR object detection

J Tu, M Ren, S Manivasagam… - Proceedings of the …, 2020 - openaccess.thecvf.com
Modern autonomous driving systems rely heavily on deep learning models to process point
cloud sensory data; meanwhile, deep models have been shown to be susceptible to …

Adversarial camouflage: Hiding physical-world attacks with natural styles

R Duan, X Ma, Y Wang, J Bailey… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing
works have mostly focused on either digital adversarial examples created via small and …

Provably robust boosted decision stumps and trees against adversarial attacks

M Andriushchenko, M Hein - Advances in neural …, 2019 - proceedings.neurips.cc
The problem of adversarial robustness has been studied extensively for neural networks.
However, for boosted decision trees and decision stumps there are almost no results, even …

AdvSim: Generating safety-critical scenarios for self-driving vehicles

J Wang, A Pun, J Tu, S Manivasagam… - Proceedings of the …, 2021 - openaccess.thecvf.com
As self-driving systems become better, simulating scenarios where the autonomy stack may
fail becomes more important. Traditionally, those scenarios are generated for a few scenes …