Provable adversarial robustness for group equivariant tasks: Graphs, point clouds, molecules, and more

J Schuchardt, Y Scholten… - Advances in Neural …, 2023 - proceedings.neurips.cc
A machine learning model is traditionally considered robust if its prediction remains (almost)
constant under input perturbations with small norm. However, real-world tasks like molecular …

Reachability analysis of neural network control systems

C Zhang, W Ruan, P Xu - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-
physical systems. Despite the various verification approaches for neural networks, the safety …

Certified policy smoothing for cooperative multi-agent reinforcement learning

R Mu, W Ruan, LS Marcolino, G Jin, Q Ni - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical
scenarios, thus the analysis of robustness for c-MARL models is profoundly important …

Generalizing universal adversarial perturbations for deep neural networks

Y Zhang, W Ruan, F Wang, X Huang - Machine Learning, 2023 - Springer
Previous studies have shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …

Towards verifying the geometric robustness of large-scale neural networks

F Wang, P Xu, W Ruan, X Huang - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric
transformation. This paper aims to verify the robustness of large-scale DNNs against the …

Reward Certification for Policy Smoothed Reinforcement Learning

R Mu, LS Marcolino, Y Zhang, T Zhang… - Proceedings of the …, 2024 - ojs.aaai.org
Reinforcement Learning (RL) has achieved remarkable success in safety-critical areas, but it
can be weakened by adversarial attacks. Recent studies have introduced "smoothed …

CausalPC: Improving the Robustness of Point Cloud Classification by Causal Effect Identification

Y Huang, M Zhang, D Ding, E Jiang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Deep neural networks have demonstrated remarkable performance in point cloud
classification. However, previous works show they are vulnerable to adversarial …

Sora: Scalable black-box reachability analyser on neural networks

P Xu, F Wang, W Ruan, C Zhang… - ICASSP 2023-2023 …, 2023 - ieeexplore.ieee.org
The vulnerability of deep neural networks (DNNs) to input perturbations has posed a
significant challenge. Recent work on robustness verification of DNNs not only lacks …

Bridging formal methods and machine learning with model checking and global optimisation

S Bensalem, X Huang, W Ruan, Q Tang, C Wu… - Journal of Logical and …, 2024 - Elsevier
Formal methods and machine learning are two research fields with drastically different
foundations and philosophies. Formal methods utilise mathematically rigorous techniques …

Model-agnostic reachability analysis on deep neural networks

C Zhang, W Ruan, F Wang, P Xu, G Min… - Pacific-Asia Conference …, 2023 - Springer
Verification plays an essential role in the formal analysis of safety-critical systems. Most
current verification methods have specific requirements when working on Deep Neural …