Building robust machine learning systems: Current progress, research challenges, and opportunities

JJ Zhang, K Liu, F Khalid, MA Hanif… - Proceedings of the 56th …, 2019 - dl.acm.org
Machine learning, in particular deep learning, is being used in almost all aspects of life to assist humans, especially in mobile and Internet of Things (IoT)-based applications …

Deep learning for edge computing: Current trends, cross-layer optimizations, and open research challenges

A Marchisio, MA Hanif, F Khalid… - 2019 IEEE Computer …, 2019 - ieeexplore.ieee.org
In the Machine Learning era, Deep Neural Networks (DNNs) have taken the spotlight due to their unmatched performance in several applications, such as image processing, computer …

QuSecNets: Quantization-based defense mechanism for securing deep neural network against adversarial attacks

F Khalid, H Ali, H Tariq, MA Hanif… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Adversarial examples have emerged as a significant threat to machine learning algorithms,
especially to the convolutional neural networks (CNNs). In this paper, we propose two …
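The core idea of quantization as a defense can be sketched in a few lines of Python: snapping pixel intensities onto a small set of discrete levels erases any perturbation smaller than one quantization bin. This is only an illustration of constant (non-trainable) quantization under assumed [0, 1] inputs; the function name and level count are not from the paper, which also proposes a trainable variant.

```python
import numpy as np

def quantize_input(x, levels=4):
    """Snap intensities in [0, 1] onto `levels` discrete bins, so that
    any adversarial perturbation smaller than one bin width vanishes."""
    return np.floor(x * levels) / levels

# A sub-bin perturbation is erased by the quantization step.
clean = np.array([0.20, 0.55, 0.90])
adv = clean + 0.04  # small adversarial noise
assert np.allclose(quantize_input(clean), quantize_input(adv))
```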

TrISec: training data-unaware imperceptible security attacks on deep neural networks

F Khalid, MA Hanif, S Rehman… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference …
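TrISec itself steers a pretrained network's gradients while checking perceptibility with image-similarity metrics (e.g., correlation and structural similarity). The hedged PyTorch sketch below substitutes a plain iterative gradient attack under a tight L-infinity budget as a stand-in; the budget, step count, and untargeted loss are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def imperceptible_attack(model, x, label, eps=2/255, steps=10):
    """Iteratively push `x` away from its true class while clamping the
    perturbation to an L-infinity ball of radius `eps`, keeping the
    noise below the threshold of human perception."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the imperceptibility budget.
        x_adv = x_adv.detach() + (eps / steps) * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv
```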

SSCNets: Robustifying DNNs using secure selective convolutional filters

H Ali, F Khalid, HA Tariq, MA Hanif, R Ahmed… - IEEE Design & …, 2019 - ieeexplore.ieee.org
Robust neural inference hinges on trustworthy training data, and deep neural networks (DNNs) depend heavily on this assumption. However, DNNs can be exploited by …
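A hedged reading of "secure selective filtering" is a fixed, data-independent preprocessing layer placed in front of the DNN, so that poisoned training data cannot influence it. The PyTorch sketch below uses Sobel edge kernels and a magnitude threshold as assumed stand-ins for the paper's actual filters and selection rule.

```python
import torch
import torch.nn.functional as F

# Fixed Sobel kernel: data-independent, hence untouched by poisoned training sets.
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]])

def secure_selective_filter(x, threshold=0.1):
    """Keep only pixels lying on strong edges of a grayscale batch
    x of shape (N, 1, H, W), discarding the low-amplitude texture
    where adversarial noise tends to hide."""
    kernels = torch.stack([SOBEL_X, SOBEL_X.t()]).unsqueeze(1)  # (2, 1, 3, 3)
    edges = F.conv2d(x, kernels, padding=1)                     # (N, 2, H, W)
    magnitude = edges.pow(2).sum(dim=1, keepdim=True).sqrt()    # gradient strength
    return torch.where(magnitude > threshold, x, torch.zeros_like(x))
```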

[PDF] SNN under attack: are spiking deep belief networks vulnerable to adversarial examples?

A Marchisio, G Nanfa, F Khalid, MA Hanif… - arXiv preprint arXiv …, 2019 - researchgate.net
Recently, many adversarial examples have emerged for Deep Neural Networks (DNNs), causing misclassifications. However, in-depth work still needs to be performed to …

[PDF] Black-Box Adversarial Attacks for Deep Neural Networks and Spiking Neural Networks

G Nanfa - 2019 - webthesis.biblio.polito.it
Recently, many adversarial examples have emerged for Deep Neural Networks (DNNs), causing misclassifications. These perturbations, added to the test inputs, are small and …
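Since spiking neurons are not readily differentiable, attacks that only query a model's outputs are a natural fit for SNNs. As an illustrative stand-in for the thesis's methods, the sketch below runs a simple random search; `predict` is an assumed black-box returning class probabilities, and every parameter is hypothetical.

```python
import numpy as np

def black_box_attack(predict, x, true_label, eps=8/255, queries=500, seed=0):
    """Query-only attack: sample random L-infinity-bounded perturbations
    and keep whichever most lowers the model's confidence in the true
    class. No gradients are needed, so it works for DNNs and SNNs alike."""
    rng = np.random.default_rng(seed)
    best, best_conf = x, predict(x)[true_label]
    for _ in range(queries):
        candidate = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        conf = predict(candidate)[true_label]
        if conf < best_conf:  # keep the most damaging perturbation so far
            best, best_conf = candidate, conf
    return best
```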