Stain: Stealthy avenues of attacks on horizontally collaborated convolutional neural network inference and their mitigation
AA Adeyemo, JJ Sanderson, TA Odetola… - IEEE Access, 2023 - ieeexplore.ieee.org
With significant potential improvement in device-to-device (D2D) communication due to improved wireless link capacity (e.g., 5G and NextG systems), collaboration among multiple edge devices, called horizontal collaboration (HC), is becoming a reality for real-time Edge Intelligence (EI). The distributed nature of HC offers an advantage against traditional adversarial attacks because the adversary does not have access to the entire deep learning architecture (DLA). However, due to the involvement of multiple untrusted edge devices in an HC environment, the possibility of malicious devices cannot be eliminated. In this paper, we unearth attacks that are highly effective and stealthy even when the attacker has minimal knowledge of the DLA, as is the case in an HC-based DLA. We also provide novel filtering methods to mitigate such attacks. Our novel attacks leverage local information available in the output feature maps (FMs) of a targeted edge device to modify regular adversarial attacks (e.g., the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA)). Similarly, a customized convolutional neural network (CNN)-based filter is empirically designed, developed, and tested. Four CNN models (LeNet, CapsuleNet, MiniVGGNet, and VGG16) are used to validate the proposed attack and defense methodologies. Our three attacks on the four CNN models (with two variations of each attack) cause a substantial accuracy drop of 62% on average. The proposed filtering approach mitigates the attacks, recovering accuracy to 75.1% on average. To the best of our knowledge, this is the first work that investigates the security vulnerability of DLAs in the HC environment, and all three of our attacks are scalable and agnostic to the partition location within the DLA.
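The abstract does not spell out the attack mechanics, but its core idea, perturbing a device's output feature maps rather than the input image, can be illustrated with a minimal sketch. The snippet below applies an FGSM-style perturbation to an intermediate FM using a local surrogate head; the names (front, surrogate_back, fgsm_on_fm), model shapes, and epsilon value are illustrative assumptions, not the paper's actual Stain implementation.

    # Illustrative sketch only -- NOT the paper's Stain attack. Shows FGSM
    # applied to output feature maps (FMs) of one partition of a CNN.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy partitioned model: "front" runs on the compromised edge device.
    # The attacker cannot see the remaining layers, so a local surrogate
    # head stands in for them when computing gradients (an assumption).
    front = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
    surrogate_back = nn.Sequential(nn.Flatten(), nn.Linear(8 * 28 * 28, 10))

    def fgsm_on_fm(fm: torch.Tensor, label: torch.Tensor, eps: float = 0.1):
        """Perturb a feature map in the FGSM direction w.r.t. a surrogate loss."""
        fm = fm.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(surrogate_back(fm), label)
        loss.backward()
        # Gradient sign is taken on the FM itself, not on the input image.
        return (fm + eps * fm.grad.sign()).detach()

    x = torch.randn(1, 1, 28, 28)           # stand-in input image
    y = torch.tensor([3])                    # stand-in true label
    clean_fm = front(x)                      # FM the device would normally forward
    adv_fm = fgsm_on_fm(clean_fm, y)         # perturbed FM sent downstream
    print((adv_fm - clean_fm).abs().max())   # perturbation bounded by eps

Because the perturbation is bounded elementwise by eps and is injected at an internal partition boundary, a downstream device that only sees FMs has no clean input image to compare against, which is one plausible reading of why such attacks are described as stealthy.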