Robust anomaly detection and backdoor attack detection via differential privacy

M Du, R Jia, D Song - arXiv preprint arXiv:1911.07116, 2019 - arxiv.org
Outlier detection and novelty detection are two important topics for anomaly detection.
Suppose the majority of a dataset is drawn from a certain distribution; outlier detection and …

Adversarial learning techniques for security and privacy preservation: A comprehensive review

JJ Hathaliya, S Tanwar, P Sharma - Security and Privacy, 2022 - Wiley Online Library
In recent years, the use of smart devices has increased exponentially, resulting in massive
amounts of data. To handle this data, effective data storage and management has required …

CAPE: Context-aware private embeddings for private language learning

R Plant, D Gkatzia, V Giuffrida - arXiv preprint arXiv:2108.12318, 2021 - arxiv.org
Deep learning-based language models have achieved state-of-the-art results in a number of
applications including sentiment analysis, topic labelling, intent classification and others …

Robustness threats of differential privacy

N Tursynbek, A Petiushko, I Oseledets - arXiv preprint arXiv:2012.07828, 2020 - arxiv.org
Differential privacy (DP) is a gold-standard concept of measuring and guaranteeing privacy
in data analysis. It is well-known that the cost of adding DP to a deep learning model is its …

Robustness, privacy, and generalization of adversarial training

F He, S Fu, B Wang, D Tao - arXiv preprint arXiv:2012.13573, 2020 - arxiv.org
Adversarial training can considerably robustify deep neural networks to resist adversarial
attacks. However, some works suggested that adversarial training might compromise the …

Dadi: Dynamic discovery of fair information with adversarial reinforcement learning

MA Bakker, DP Tu, HR Valdés, KP Gummadi… - arXiv preprint arXiv …, 2019 - arxiv.org
We introduce a framework for dynamic adversarial discovery of information (DADI),
motivated by a scenario where information (a feature set) is used by third parties with …

[PDF][PDF] Differentially private lifelong learning

NH Phan, T My - Privacy in Machine Learning (PriML), NeurIPS'19 …, 2019 - par.nsf.gov
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in
lifelong learning (L2M) for deep neural networks. Our key idea is to employ functional …

Making Images Resilient to Adversarial Example Attacks

S Tian, Y Cai, F Bao, R Oruganti - International Conference on Artificial …, 2022 - Springer
Adversarial example attacks twist an image to cause image classifiers to output a wrong
prediction, yet the perturbation is too subtle to be perceived by a human. Existing research …

Artificial neural networks in public policy: Towards an analytical framework

JA Lee - 2020 - search.proquest.com
This dissertation assesses how artificial neural networks (ANNs) and other machine learning
systems should be devised, built, and implemented in US governmental organizations (i.e. …

Improving fairness in budget-constrained algorithmic decision-making

MA Bakker - 2020 - dspace.mit.edu
The last five years have seen a vast increase in academic and popular interest in "fair"
machine learning. But while the community has made significant progress towards …