Model-agnostic reachability analysis on deep neural networks

C Zhang, W Ruan, F Wang, P Xu, G Min… - Pacific-Asia Conference …, 2023 - Springer
Verification plays an essential role in the formal analysis of safety-critical systems. Most
current verification methods have specific requirements when working on Deep Neural …

Reachability analysis of deep neural networks with provable guarantees

W Ruan, X Huang, M Kwiatkowska - arXiv preprint arXiv:1805.02242, 2018 - arxiv.org
Verifying correctness of deep neural networks (DNNs) is challenging. We study a generic
reachability problem for feed-forward DNNs which, for a given set of inputs to the network …

Verification of recurrent neural networks with star reachability

HD Tran, SW Choi, X Yang, T Yamaguchi… - Proceedings of the 26th …, 2023 - dl.acm.org
The paper extends the recent star reachability method to verify the robustness of recurrent
neural networks (RNNs) for use in safety-critical applications. RNNs are a popular machine …

PEREGRiNN: Penalized-relaxation greedy neural network verifier

H Khedr, J Ferlez, Y Shoukry - … Conference, CAV 2021, Virtual Event, July …, 2021 - Springer
Neural Networks (NNs) have increasingly apparent safety implications
commensurate with their proliferation in real-world applications: both unanticipated as well …

Verification-Friendly Deep Neural Networks

A Baninajjar, A Rezine, A Aminifar - arXiv preprint arXiv:2312.09748, 2023 - arxiv.org
Machine learning techniques often lack formal correctness guarantees. This is evidenced by
the widespread adversarial examples that plague most deep-learning applications. This …

An abstraction-based framework for neural network verification

YY Elboher, J Gottschlich, G Katz - … , CAV 2020, Los Angeles, CA, USA …, 2020 - Springer
Deep neural networks are increasingly being used as controllers for safety-critical systems.
Because neural networks are opaque, certifying their correctness is a significant challenge …

Verifying Global Two-Safety Properties in Neural Networks with Confidence

A Athavale, E Bartocci, M Christakis, M Maffei… - arXiv preprint arXiv …, 2024 - arxiv.org
We present the first automated verification technique for confidence-based 2-safety
properties, such as global robustness and global fairness, in deep neural networks (DNNs) …

Toward scalable verification for safety-critical deep networks

L Kuper, G Katz, J Gottschlich, K Julian… - arXiv preprint arXiv …, 2018 - arxiv.org
The increasing use of deep neural networks for safety-critical applications, such as
autonomous driving and flight control, raises concerns about their safety and reliability …

VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees

A Baninajjar, A Rezine, A Aminifar - Forty-first International Conference on … - openreview.net
Machine learning techniques often lack formal correctness guarantees, evidenced by the
widespread adversarial examples that plague most deep-learning applications. This lack of …

Taming reachability analysis of DNN-controlled systems via abstraction-based training

J Tian, D Zhi, S Liu, P Wang, G Katz… - … Conference on Verification …, 2023 - Springer
The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not
only the networks themselves but also the hosting DNN-controlled systems. Reachability …