Robust anomaly detection and backdoor attack detection via differential privacy
Outlier detection and novelty detection are two important topics for anomaly detection.
Suppose the majority of a dataset is drawn from a certain distribution; outlier detection and …
Adversarial learning techniques for security and privacy preservation: A comprehensive review
In recent years, the use of smart devices has increased exponentially, resulting in massive
amounts of data. To handle this data, effective data storage and management has required …
CAPE: Context-aware private embeddings for private language learning
Deep learning-based language models have achieved state-of-the-art results in a number of
applications including sentiment analysis, topic labelling, intent classification and others …
Robustness threats of differential privacy
N Tursynbek, A Petiushko, I Oseledets - arXiv preprint arXiv:2012.07828, 2020 - arxiv.org
Differential privacy (DP) is a gold-standard concept of measuring and guaranteeing privacy
in data analysis. It is well known that the cost of adding DP to a deep learning model is its …
Robustness, privacy, and generalization of adversarial training
Adversarial training can considerably robustify deep neural networks to resist adversarial
attacks. However, some works suggested that adversarial training might compromise the …
Dadi: Dynamic discovery of fair information with adversarial reinforcement learning
We introduce a framework for dynamic adversarial discovery of information (DADI),
motivated by a scenario where information (a feature set) is used by third parties with …
[PDF][PDF] Differentially private lifelong learning
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in
lifelong learning (L2M) for deep neural networks. Our key idea is to employ functional …
Making Images Resilient to Adversarial Example Attacks
Adversarial example attacks twist an image to cause image classifiers to output a wrong
prediction, yet the perturbation is too subtle to be perceived by a human. Existing research …
Artificial neural networks in public policy: Towards an analytical framework
JA Lee - 2020 - search.proquest.com
This dissertation assesses how artificial neural networks (ANNs) and other machine learning
systems should be devised, built, and implemented in US governmental organizations (i.e. …
Improving fairness in budget-constrained algorithmic decision-making
MA Bakker - 2020 - dspace.mit.edu
The last five years have seen a vast increase in academic and popular interest in "fair"
machine learning. But while the community has made significant progress towards …