Privacy-preserving machine learning: Methods, challenges and directions
Machine learning (ML) is increasingly being adopted in a wide variety of application
domains. Usually, a well-performing ML model relies on a large volume of training data and …
Cheetah: Lean and fast secure two-party deep neural network inference
Secure two-party neural network inference (2PC-NN) can offer privacy protection for both the
client and the server and is a promising technique in the machine-learning-as-a-service …
Towards practical secure neural network inference: the journey so far and the road ahead
Neural networks (NNs) have become one of the most important tools for artificial
intelligence. Well-designed and trained NNs can perform inference (e.g., make decisions or …
Iron: Private inference on transformers
We initiate the study of private inference on Transformer-based models in the client-server
setting, where clients have private inputs and servers hold proprietary models. Our main …
ELSA: Secure aggregation for federated learning with malicious actors
Federated learning (FL) is an increasingly popular approach for machine learning (ML) in
cases where the training dataset is highly distributed. Clients perform local training on their …
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
Bolt: Privacy-preserving, accurate and efficient inference for transformers
The advent of transformers has brought about significant advancements in traditional
machine learning tasks. However, their pervasive deployment has raised concerns about …
Experimenting with zero-knowledge proofs of training
How can a model owner prove they trained their model according to the correct
specification? More importantly, how can they do so while preserving the privacy of the …
SoK: Cryptographic neural-network computation
We studied 53 privacy-preserving neural-network papers in 2016-2022 based on
cryptography (without trusted processors or differential privacy), 16 of which only use …
BumbleBee: Secure two-party inference framework for large transformers
Large transformer-based models have realized state-of-the-art performance on lots of real-
world tasks such as natural language processing and computer vision. However, with the …