Authors
Khalid Albulayhi, Qasem Abu Al-Haija
Publication date
2023
Journal
International Journal of Wireless and Microwave Technologies
Volume
13
Description
Using deep learning networks, anomaly detection systems have achieved better performance and precision. However, adversarial examples render deep learning-based anomaly detection systems insecure, since attackers can use them to fool the models and increase the attack success rate. Improving the robustness of anomaly detection systems against adversarial attacks is therefore imperative. This paper tests adversarial examples against three anomaly detection models based on a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and a Deep Belief Network (DBN). It assesses the susceptibility of current datasets (in particular, the UNSW-NB15 and Bot-IoT datasets) that represent the contemporary network environment. The results demonstrate the viability of the attacks on both datasets, with adversarial samples diminishing overall detection performance. The DL algorithms responded differently to the adversarial samples across the two datasets, and the DBN gave the best performance on the UNSW-NB15 dataset.
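The abstract does not name the attack algorithm used to craft the adversarial samples. As a purely illustrative sketch, the snippet below shows an FGSM-style perturbation of tabular network-flow features against a trained classifier; the function name, the PyTorch framework choice, and the assumption of min-max-scaled features in [0, 1] are all assumptions, not details taken from the paper.

```python
# Illustrative only: FGSM-style adversarial perturbation of flow features.
# All names here are hypothetical; the paper does not specify its attack method.
import torch
import torch.nn as nn


def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return adversarial copies of feature vectors x for a classifier `model`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid
    # feature range (assumes features were min-max scaled to [0, 1]).
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Evaluating a detector on `fgsm_perturb(model, x_test, y_test)` instead of `x_test` is one common way to measure the kind of performance drop under adversarial samples that the abstract reports.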