Structure invariant transformation for better adversarial transferability

X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) against adversarial
examples, there is an urgent need for an effective adversarial attack to identify the …

BiasAsker: Measuring the bias in conversational AI systems

Y Wan, W Wang, P He, J Gu, H Bai… - Proceedings of the 31st …, 2023 - dl.acm.org
Powered by advanced Artificial Intelligence (AI) techniques, conversational AI systems, such
as ChatGPT, and digital assistants like Siri, have been widely deployed in daily life …

Boosting adversarial transferability by block shuffle and rotation

K Wang, X He, W Wang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Adversarial examples mislead deep neural networks with imperceptible perturbations and
have brought significant threats to deep learning. An important aspect is their transferability …

Boosting adversarial transferability by achieving flat local maxima

Z Ge, H Liu, W Xiaosen, F Shang… - Advances in Neural …, 2023 - proceedings.neurips.cc
Transfer-based attack adopts the adversarial examples generated on the surrogate model to
attack various models, making it applicable in the physical world and attracting increasing …

Rethinking the backward propagation for adversarial transferability

W Xiaosen, K Tong, K He - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Transfer-based attacks generate adversarial examples on the surrogate model, which can
mislead other black-box models without access, making it promising to attack real-world …

Improving the transferability of adversarial examples with arbitrary style transfer

Z Ge, F Shang, H Liu, Y Liu, L Wan, W Feng… - Proceedings of the 31st …, 2023 - dl.acm.org
Deep neural networks are vulnerable to adversarial examples crafted by applying human-
imperceptible perturbations on clean inputs. Although many attack methods can achieve …

An image is worth a thousand toxic words: A metamorphic testing framework for content moderation software

W Wang, J Huang, J Huang, C Chen… - 2023 38th IEEE/ACM …, 2023 - ieeexplore.ieee.org
The exponential growth of social media platforms has brought about a revolution in
communication and content dissemination in human society. Nevertheless, these platforms …

On the robustness of latent diffusion models

J Zhang, Z Xu, S Cui, C Meng, W Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Latent diffusion models achieve state-of-the-art performance on a variety of generative tasks,
such as image synthesis and image editing. However, the robustness of latent diffusion …

[PDF] Towards Semantics- and Domain-Aware Adversarial Attacks

J Zhang, YC Huang, W Wu, MR Lyu - IJCAI, 2023 - ijcai.org
Language models are known to be vulnerable to textual adversarial attacks, which
add human-imperceptible perturbations to the input to mislead DNNs. It is thus imperative to …

Validating multimedia content moderation software via semantic fusion

W Wang, J Huang, C Chen, J Gu, J Zhang… - Proceedings of the …, 2023 - dl.acm.org
The exponential growth of social media platforms, such as Facebook, Instagram, YouTube,
and TikTok, has revolutionized communication and content publication in human society …