Authors
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Junaid Qadir, Muhammad Shafique
Publication date
2019/3/25
Conference paper
2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
Pages
902-907
Publisher
IEEE
Description
Deep neural network (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for the task of classification, due to their superior capability of learning efficiently from large datasets. The discovery of a number of well-known attacks such as dataset poisoning, adversarial examples, and network manipulation (through the addition of malicious nodes) has, however, put the spotlight squarely on the lack of security in DNN-based ML systems. In particular, malicious actors can use these well-known attacks to cause random/targeted misclassification, or to change the prediction confidence, by only slightly but systematically manipulating the environmental parameters, inference data, or the data acquisition block. Most of the prior adversarial attacks have, however, not accounted for the pre-processing noise filters commonly integrated with the ML-inference …
Total citations
Scholar articles
F Khalid, MA Hanif, S Rehman, J Qadir, M Shafique - 2019 Design, Automation & Test in Europe Conference …, 2019