Provable adversarial robustness for group equivariant tasks: Graphs, point clouds, molecules, and more
J Schuchardt, Y Scholten… - Advances in Neural …, 2023 - proceedings.neurips.cc
A machine learning model is traditionally considered robust if its prediction remains (almost)
constant under input perturbations with small norm. However, real-world tasks like molecular …
Reachability analysis of neural network control systems
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-
physical systems. Despite the various verification approaches for neural networks, the safety …
Certified policy smoothing for cooperative multi-agent reinforcement learning
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical
scenarios, thus the analysis of robustness for c-MARL models is profoundly important …
Generalizing universal adversarial perturbations for deep neural networks
Previous studies have shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …
Towards verifying the geometric robustness of large-scale neural networks
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric
transformation. This paper aims to verify the robustness of large-scale DNNs against the …
Reward Certification for Policy Smoothed Reinforcement Learning
Reinforcement Learning (RL) has achieved remarkable success in safety-critical areas, but it
can be weakened by adversarial attacks. Recent studies have introduced "smoothed …
CausalPC: Improving the Robustness of Point Cloud Classification by Causal Effect Identification
Deep neural networks have demonstrated remarkable performance in point cloud
classification. However, previous works show they are vulnerable to adversarial …
Sora: Scalable black-box reachability analyser on neural networks
The vulnerability of deep neural networks (DNNs) to input perturbations has posed a
significant challenge. Recent work on robustness verification of DNNs not only lacks …
Bridging formal methods and machine learning with model checking and global optimisation
Formal methods and machine learning are two research fields with drastically different
foundations and philosophies. Formal methods utilise mathematically rigorous techniques …
Model-agnostic reachability analysis on deep neural networks
Verification plays an essential role in the formal analysis of safety-critical systems. Most
current verification methods have specific requirements when working on Deep Neural …