Improving neural network verification through spurious region guided refinement
We propose a spurious region guided refinement approach for robustness verification of
deep neural networks. Our method starts with applying the DeepPoly abstract domain to …
QVIP: an ILP-based formal verification approach for quantized neural networks
Deep learning has become a promising programming paradigm in software development,
owing to its surprising performance in solving many challenging tasks. Deep neural …
BDD4BNN: a BDD-based quantitative analysis framework for binarized neural networks
Verifying and explaining the behavior of neural networks is becoming increasingly
important, especially when they are deployed in safety-critical applications. In this paper, we …
TrajPAC: Towards Robustness Verification of Pedestrian Trajectory Prediction Models
Robust pedestrian trajectory forecasting is crucial to developing safe autonomous vehicles.
Although previous works have studied adversarial robustness in the context of trajectory …
What, indeed, is an achievable provable guarantee for learning-enabled safety-critical systems
Machine learning has made remarkable advancements, but confidently utilising
learning-enabled components in safety-critical domains still poses challenges. Among the …
QEBVerif: Quantization error bound verification of neural networks
To alleviate the practical constraints for deploying deep neural networks (DNNs) on edge
devices, quantization is widely regarded as one promising technique. It reduces the …
Caisar: A platform for characterizing artificial intelligence safety and robustness
We present CAISAR, an open-source platform under active development for the
characterization of AI systems' robustness and safety. CAISAR provides a unified entry point …
Enhancing robustness verification for deep neural networks via symbolic propagation
Deep neural networks (DNNs) have been shown to lack robustness, as they are vulnerable
to small perturbations on the inputs. This has led to safety concerns on applying DNNs to …
A declarative metamorphic testing framework for autonomous driving
Autonomous driving has gained much attention from both industry and academia. Currently,
Deep Neural Networks (DNNs) are widely used for perception and control in autonomous …