Tamp-X: Attacking explainable natural language classifiers through tampered activations
While the technique of Deep Neural Networks (DNNs) has been instrumental in achieving
state-of-the-art results for various Natural Language Processing (NLP) tasks, recent works …
fakeWeather: Adversarial attacks for deep neural networks emulating weather conditions on the camera lens of autonomous systems
A Marchisio, G Caramia, M Martina… - … Joint Conference on …, 2022 - ieeexplore.ieee.org
Recently, Deep Neural Networks (DNNs) have achieved remarkable performances in many
applications, while several studies have exposed their vulnerabilities to malicious attacks …
Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers
Deep learning architectures have emerged as powerful function approximators in a broad
spectrum of complex representation learning tasks, such as computer vision, natural …
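As a refresher on the single-shot FGSM step this entry builds on, here is a generic numpy sketch on a toy logistic model (the model, names, and values are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One FGSM step: shift every coordinate of x by epsilon in the
    sign direction of the loss gradient w.r.t. the input."""
    return x + epsilon * np.sign(grad)

def logistic_loss(w, x, y):
    # y in {-1, +1}; loss = log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * np.dot(w, x)))

def input_grad(w, x, y):
    # gradient of the logistic loss with respect to the input x
    s = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))  # sigmoid(y * w.x)
    return -(1.0 - s) * y * w

w = np.array([1.0, -2.0, 0.5])   # toy classifier weights
x = np.array([0.3, 0.1, -0.4])   # clean input
y = 1                            # true label
x_adv = fgsm_perturb(x, input_grad(w, x, y), epsilon=0.1)
# on a linear model the single step strictly increases the loss
print(logistic_loss(w, x_adv, y) > logistic_loss(w, x, y))  # → True
```

Because the model is linear, the sign step is exactly the worst-case L∞-bounded perturbation; on a deep network it is only a first-order approximation.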
Multi-modal adversarial example detection with transformer
Although deep neural networks have shown great potential for many tasks, they are
vulnerable to adversarial examples, which are generated by adding small perturbations to …
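The snippet's definition, a small perturbation that flips a classifier's output, can be illustrated on a toy linear model (an assumption for illustration only; this is not the paper's transformer-based detector):

```python
import numpy as np

# Toy linear classifier: predict sign(w . x)
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.2, 0.1, -0.1])   # w.x = 0.05 -> class +1

# A small perturbation (L-inf norm 0.06) aimed against the weights
delta = -0.06 * np.sign(w)
x_adv = x + delta

print(np.dot(w, x))      # → 0.05  (class +1)
print(np.dot(w, x_adv))  # → -0.1  (class -1)
```

The perturbation is small relative to the input's scale, yet it crosses the decision boundary, which is exactly what makes such examples hard to spot and motivates dedicated detectors.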
PCA-based membership inference attack against machine learning models
Changgen Peng, Ting Gao, Huilan Liu, Hongfa Ding - Journal on Communications, 2022 - infocomm-journal.com
To address the restricted-access failure of current black-box membership inference attacks, a membership inference attack based on principal component analysis (PCA) is proposed. First, targeting the restricted-access problem of black-box membership inference attacks, a fast-decision membership inference …
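For reference, the PCA step this attack relies on projects feature vectors onto their top principal components; a minimal numpy sketch of that step only (the attack's fast-decision logic is not reproduced, and all names here are illustrative):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)  # center each feature
    # SVD of centered data: rows of Vt are the principal axes,
    # ordered by decreasing singular value (explained variance)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # 100 samples, 8 features
Z = pca_project(X, 2)
print(Z.shape)  # → (100, 2)
```

Since singular values are returned in descending order, the first projected component always carries at least as much variance as the second.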