A Fourier perspective on model robustness in computer vision

D Yin, R Gontijo Lopes, J Shlens… - Advances in Neural …, 2019 - proceedings.neurips.cc
Achieving robustness to distributional shift is a longstanding and challenging goal of
computer vision. Data augmentation is a commonly used approach for improving …

Feature distillation: DNN-oriented JPEG compression against adversarial examples

Z Liu, Q Liu, T Liu, N Xu, X Lin… - 2019 IEEE/CVF …, 2019 - ieeexplore.ieee.org
Image compression-based approaches for defending against adversarial-example
attacks, which threaten the safe use of deep neural networks (DNNs), have been …

Adversarial examples are a natural consequence of test error in noise

J Gilmer, N Ford, N Carlini… - … Conference on Machine …, 2019 - proceedings.mlr.press
Over the last few years, the phenomenon of adversarial examples—maliciously constructed
inputs that fool trained machine learning models—has captured the attention of the research …

Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities

S Cattell, A Ghosh, LA Kaffee - arXiv preprint arXiv:2402.07039, 2024 - arxiv.org
Harm reporting in Artificial Intelligence (AI) currently lacks a structured process for disclosing
and addressing algorithmic flaws, relying largely on an ad-hoc approach. This contrasts …

Universal adversarial perturbations through the lens of deep steganography: Towards a Fourier perspective

C Zhang, P Benz, A Karjauv, IS Kweon - Proceedings of the AAAI …, 2021 - ojs.aaai.org
The booming interest in adversarial attacks stems from a misalignment between human
vision and a deep neural network (DNN), i.e., a human-imperceptible perturbation fools the …

Review on image processing based adversarial example defenses in computer vision

M Qiu, H Qiu - 2020 IEEE 6th Intl Conference on Big Data …, 2020 - ieeexplore.ieee.org
Recent research works showed that deep neural networks are vulnerable to adversarial
examples, which are usually maliciously created by carefully adding deliberate and …

DeSVig: Decentralized swift vigilance against adversarial attacks in industrial artificial intelligence systems

G Li, K Ota, M Dong, J Wu, J Li - IEEE Transactions on Industrial …, 2019 - ieeexplore.ieee.org
Individually reinforcing the robustness of a single deep learning model only gives limited
security guarantees especially when facing adversarial examples. In this article, we propose …

MalJPEG: Machine learning based solution for the detection of malicious JPEG images

A Cohen, N Nissim, Y Elovici - IEEE Access, 2020 - ieeexplore.ieee.org
In recent years, cyber-attacks against individuals, businesses, and organizations have
increased. Cyber criminals are always looking for effective vectors to deliver malware to …