Building robust machine learning systems: Current progress, research challenges, and opportunities
Machine learning, in particular deep learning, is being used in almost all aspects of life
to assist humans, specifically in mobile and Internet of Things (IoT)-based applications …
Deep learning for edge computing: Current trends, cross-layer optimizations, and open research challenges
In the Machine Learning era, Deep Neural Networks (DNNs) have taken the spotlight due to
their unmatched performance in several applications, such as image processing, computer …
QuSecNets: Quantization-based defense mechanism for securing deep neural network against adversarial attacks
Adversarial examples have emerged as a significant threat to machine learning algorithms,
especially to the convolutional neural networks (CNNs). In this paper, we propose two …
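The QuSecNets snippet names a quantization-based defense; the general idea behind such defenses is that mapping pixel intensities onto a coarse grid absorbs the low-amplitude perturbations adversarial examples rely on. A minimal illustrative sketch of that idea (not the paper's actual implementation; the function name and level count are assumptions):

```python
import numpy as np

def quantize_input(x, levels=4):
    """Map intensities in [0, 1] onto `levels` discrete values.

    Small adversarial perturbations fall between grid points and are
    rounded away before the input reaches the classifier. Illustrative
    sketch only, not the QuSecNets method itself.
    """
    # Scale to [0, levels-1], round to the nearest level, scale back.
    return np.round(x * (levels - 1)) / (levels - 1)

# A small perturbation is absorbed by the coarse quantization grid.
clean = np.array([0.00, 0.34, 0.66, 1.00])
perturbed = clean + 0.05  # imperceptible additive noise
print(quantize_input(perturbed))  # same grid points as quantize_input(clean)
```

With 4 levels the grid spacing is 1/3, so any perturbation smaller than half that spacing maps back to the clean input's quantized value.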
TrISec: training data-unaware imperceptible security attacks on deep neural networks
Most of the data manipulation attacks on deep neural networks (DNNs) during the training
stage introduce a perceptible noise that can be countered by preprocessing during inference …
SSCNets: Robustifying DNNs using secure selective convolutional filters
Robust neural inference depends heavily on the integrity of the training data, an assumption
on which deep neural networks (DNNs) rely. However, DNNs can be exploited by …
SNN under attack: are spiking deep belief networks vulnerable to adversarial examples?
Recently, many adversarial examples have emerged for Deep Neural Networks (DNNs)
causing misclassifications. However, in-depth work still needs to be performed to …
Black-Box Adversarial Attacks for Deep Neural Networks and Spiking Neural Networks
G Nanfa - 2019 - webthesis.biblio.polito.it
Recently, many adversarial examples have emerged for Deep Neural Networks (DNNs)
causing misclassifications. These perturbations, added to the test inputs, are small and …
causing misclassifications. These perturbations, added to the test inputs, are small and …