Representation learning with statistical independence to mitigate bias

E Adeli, Q Zhao, A Pfefferbaum… - Proceedings of the …, 2021 - openaccess.thecvf.com
Presence of bias (in datasets or tasks) is inarguably one of the most critical challenges in
machine learning applications and has led to pivotal debates in recent years. Such …

Analyzing privacy leakage in machine learning via multiple hypothesis testing: A lesson from Fano

C Guo, A Sablayrolles… - … Conference on Machine …, 2023 - proceedings.mlr.press
Differential privacy (DP) is by far the most widely accepted framework for mitigating privacy
risks in machine learning. However, exactly how small the privacy parameter ε …
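For context on what the privacy parameter ε controls (a generic reference point, not material from this paper), the sketch below shows the classical Gaussian-mechanism calibration, where the noise scale needed for (ε, δ)-differential privacy grows as ε shrinks; the function name and example query are illustrative assumptions.

```python
import math

def gaussian_noise_scale(epsilon: float, delta: float, l2_sensitivity: float) -> float:
    """Classical Gaussian-mechanism calibration: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    yields (epsilon, delta)-differential privacy for a query with the given L2 sensitivity
    (the classical bound assumes 0 < epsilon < 1)."""
    if not (0 < epsilon < 1) or not (0 < delta < 1):
        raise ValueError("this classical bound assumes 0 < epsilon < 1 and 0 < delta < 1")
    return l2_sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

# Example (illustrative): a mean query over n = 1000 records has L2 sensitivity 1/n.
sigma = gaussian_noise_scale(epsilon=0.5, delta=1e-5, l2_sensitivity=1.0 / 1000)
print(f"noise std needed: {sigma:.6f}")
```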

PECAM: Privacy-enhanced video streaming and analytics via securely-reversible transformation

H Wu, X Tian, M Li, Y Liu… - Proceedings of the 27th …, 2021 - dl.acm.org
As Video Streaming and Analytics (VSA) systems become increasingly popular, serious
privacy concerns have arisen about exposing too much unnecessary private information to the …

Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release

A Singh, P Vepakomma, V Sharma… - Advances in Neural …, 2023 - proceedings.neurips.cc
Cloud-based machine learning inference is an emerging paradigm in which users query a
service provider by sending their data; the provider runs an ML model on that data and …

Privacy-preserving deep action recognition: An adversarial learning framework and a new dataset

Z Wu, H Wang, Z Wang, H Jin… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
We investigate privacy-preserving, video-based action recognition in deep learning, a
problem with growing importance in smart camera applications. A novel adversarial training …
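The snippet breaks off at "adversarial training". The paper's exact formulation is not reproduced here, but the general pattern it refers to alternates between a task model that keeps utility (e.g., action recognition) and a privacy adversary that tries to recover sensitive information from the shared features. The PyTorch sketch below is a generic minimax loop of that kind; all module and variable names (encoder, task_head, adversary, lam) are illustrative assumptions, not the authors' code.

```python
# Illustrative adversarial privacy-utility training loop (PyTorch); not the paper's code.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())  # toy feature extractor
task_head = nn.Linear(128, 10)   # predicts the utility label (e.g., action class)
adversary = nn.Linear(128, 2)    # tries to predict a sensitive attribute from the features

opt_utility = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0  # trade-off between utility loss and privacy (adversary confusion) term

def train_step(x, y_task, y_sensitive):
    # 1) Update the adversary on frozen features so it stays a strong attacker.
    with torch.no_grad():
        feats = encoder(x)
    adv_loss = ce(adversary(feats), y_sensitive)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update encoder + task head: keep task accuracy while fooling the adversary.
    feats = encoder(x)
    utility_loss = ce(task_head(feats), y_task)
    privacy_loss = -ce(adversary(feats), y_sensitive)  # maximize the adversary's error
    total = utility_loss + lam * privacy_loss
    opt_utility.zero_grad()
    total.backward()
    opt_utility.step()
    return utility_loss.item(), adv_loss.item()
```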

Deep fair models for complex data: Graphs labeling and explainable face recognition

D Franco, N Navarin, M Donini, D Anguita, L Oneto - Neurocomputing, 2022 - Elsevier
The central goal of Algorithmic Fairness is to develop AI-based systems that do not
discriminate against subgroups in the population with respect to one or multiple notions of inequity …

DISCO: Dynamic and invariant sensitive channel obfuscation for deep neural networks

A Singh, A Chopra, E Garza, E Zhang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Recent deep learning models have shown remarkable performance in image classification.
While these deep learning systems are getting closer to practical deployment, the common …

Parallel successive learning for dynamic distributed model training over heterogeneous wireless networks

S Hosseinalipour, S Wang, N Michelusi… - IEEE/ACM …, 2023 - ieeexplore.ieee.org
Federated learning (FedL) has emerged as a popular technique for distributing model
training over a set of wireless devices, via iterative local updates (at devices) and global …
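The snippet describes the generic FedL loop of iterative local updates followed by global aggregation. As a baseline illustration of that pattern (plain FedAvg-style weighted averaging, not the parallel successive learning scheme this paper develops), a minimal NumPy sketch with assumed names and toy data:

```python
# Minimal FedAvg-style sketch of "local updates + global aggregation" (NumPy).
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """A few steps of local gradient descent on a linear least-squares model."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, device_data):
    """One global round: each device trains locally; the server averages the
    resulting models weighted by local dataset size."""
    local_models, sizes = [], []
    for X, y in device_data:
        local_models.append(local_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(np.stack(local_models), axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = rng.normal(size=3)
devices = []
for n in (50, 80, 120):  # heterogeneous dataset sizes across devices
    X = rng.normal(size=(n, 3))
    devices.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(3)
for _ in range(20):
    w = fedavg_round(w, devices)
print("recovered weights:", np.round(w, 3))
```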

Bounding the invertibility of privacy-preserving instance encoding using Fisher information

K Maeng, C Guo, S Kariyappa… - Advances in Neural …, 2024 - proceedings.neurips.cc
Privacy-preserving instance encoding aims to encode raw data into feature vectors without
revealing their privacy-sensitive information. When designed properly, these encodings can …
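The kind of Fisher-information bound referred to here can be illustrated with the Cramér-Rao inequality: for a noisy encoding e(x) + N(0, σ²I), the Fisher information of the output about x is J(x) = Jₑ(x)ᵀJₑ(x)/σ², and any unbiased reconstruction of x has total MSE at least trace(J(x)⁻¹). The PyTorch sketch below computes that lower bound for a toy encoder; it is a generic illustration under these assumptions, not the paper's estimator.

```python
# Illustrative Cramér-Rao lower bound on reconstructing x from e(x) + N(0, sigma^2 I);
# a generic sketch, not the paper's method.
import torch

def reconstruction_mse_lower_bound(encoder, x, sigma):
    """Fisher information of the noisy encoding about x is J = J_e(x)^T J_e(x) / sigma^2,
    where J_e is the encoder Jacobian at x. By Cramér-Rao, any unbiased reconstruction
    of x has total MSE >= trace(J^{-1})."""
    jac = torch.autograd.functional.jacobian(encoder, x)  # shape: (d_out, d_in)
    fisher = jac.T @ jac / sigma**2
    return torch.linalg.inv(fisher).trace().item()

encoder = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 6))
x = torch.randn(4)

# More output noise (larger sigma) -> less Fisher information -> larger reconstruction error floor.
for sigma in (0.1, 1.0, 10.0):
    print(f"sigma={sigma:>5}: MSE lower bound >= {reconstruction_mse_lower_bound(encoder, x, sigma):.4f}")
```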

Balancing biases and preserving privacy on balanced faces in the wild

JP Robinson, C Qin, Y Henon… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
There are demographic biases present in current facial recognition (FR) models. To
measure these biases across different ethnic and gender subgroups, we introduce our …