BET: black-box efficient testing for convolutional neural networks

J. Wang, H. Qiu, Y. Rong, H. Ye, Q. Li, Z. Li, C. Zhang
Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA), 2022. dl.acm.org
It is important to test convolutional neural networks (CNNs) to identify defects (e.g., error-inducing inputs) before deploying them in security-sensitive scenarios. Although existing white-box testing methods can effectively test CNN models with high neuron coverage, they are not applicable to privacy-sensitive scenarios where full knowledge of the target CNN models is lacking. In this work, we propose a novel Black-box Efficient Testing (BET) method for CNN models. The core insight of BET is that CNNs are generally prone to being affected by continuous perturbations. Thus, by generating such continuous perturbations in a black-box manner, we design a tunable objective function to guide our testing process toward thoroughly exploring defects across different decision boundaries of the target CNN models. We further design an efficiency-centric policy to find more error-inducing inputs within a fixed query budget. We conduct extensive evaluations with three well-known datasets and five popular CNN structures. The results show that BET significantly outperforms existing white-box and black-box testing methods in terms of the number of effective error-inducing inputs found within a fixed query/inference budget. We further show that the error-inducing inputs found by BET can be used to fine-tune the target model, improving its accuracy by up to 3%.
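
The following is a minimal sketch of the general idea described in the abstract: applying spatially continuous (low-frequency) perturbations to seed inputs and searching for error-inducing variants using only label queries under a fixed budget. It is not the authors' BET implementation; the objective function, efficiency-centric policy, and all names here (predict_fn, smooth_noise, budget, tile_size, step) are illustrative assumptions.

```python
# Hypothetical sketch of black-box testing with continuous perturbations
# under a fixed query budget. Not the authors' BET method.
import numpy as np


def smooth_noise(shape, tile_size=8, rng=None):
    """Low-frequency (spatially continuous) noise: sample a coarse grid
    and upsample it by nearest-neighbour repetition."""
    rng = rng or np.random.default_rng()
    h, w, c = shape
    coarse = rng.uniform(-1.0, 1.0, size=(h // tile_size + 1, w // tile_size + 1, c))
    full = np.repeat(np.repeat(coarse, tile_size, axis=0), tile_size, axis=1)
    return full[:h, :w, :]


def black_box_test(predict_fn, seed_images, budget=1000, step=0.03, seed=0):
    """Search for error-inducing inputs using only label queries.

    predict_fn : callable mapping a batch (N, H, W, C) in [0, 1] to labels (N,)
    seed_images: correctly classified inputs to perturb
    budget     : total number of model queries allowed
    """
    rng = np.random.default_rng(seed)
    errors = []
    queries = 0

    # Query once to record the reference label of each seed.
    base_labels = predict_fn(seed_images)
    queries += len(seed_images)

    while queries < budget:
        for img, label in zip(seed_images, base_labels):
            if queries >= budget:
                break
            # Propose a smoothly perturbed variant and clip to the valid range.
            candidate = np.clip(img + step * smooth_noise(img.shape, rng=rng), 0.0, 1.0)
            pred = predict_fn(candidate[None])[0]
            queries += 1
            if pred != label:
                # Label changed under a small continuous perturbation:
                # record the input as a candidate defect.
                errors.append((candidate, label, pred))
    return errors
```

In practice, predict_fn would wrap whatever inference API the target model exposes (for example, taking the argmax of a deployed classifier's output probabilities), so the loop never needs gradients or internal activations, which is what distinguishes this black-box setting from neuron-coverage-guided white-box testing.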