I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …

Machine learning and blockchain technologies for cybersecurity in connected vehicles

J Ahmad, MU Zia, IH Naqvi, JN Chattha… - … : Data Mining and …, 2024 - Wiley Online Library
Future connected and autonomous vehicles (CAVs) must be secured against cyberattacks
for their everyday functions on the road so that safety of passengers and vehicles can be …

High accuracy and high fidelity extraction of neural networks

M Jagielski, N Carlini, D Berthelot, A Kurakin… - 29th USENIX security …, 2020 - usenix.org
In a model extraction attack, an adversary steals a copy of a remotely deployed machine
learning model, given oracle prediction access. We taxonomize model extraction attacks …
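The attack setting this abstract describes — fitting a surrogate model to a victim's predictions, given only oracle query access — can be sketched as follows. The victim model, query distribution, and budget here are illustrative assumptions for the sketch, not the setup used by Jagielski et al.

```python
# Minimal sketch of a model extraction attack via prediction queries.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Victim: a small neural network deployed behind a prediction API (assumed).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X, y)

# Attacker: draw query inputs, label them with the victim's predictions,
# and train a surrogate on the (input, predicted label) pairs.
rng = np.random.default_rng(1)
queries = rng.normal(size=(1000, 10))
labels = victim.predict(queries)          # oracle prediction access only
surrogate = LogisticRegression(max_iter=1000).fit(queries, labels)

# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
test = rng.normal(size=(500, 10))
fidelity = float((surrogate.predict(test) == victim.predict(test)).mean())
```

The survey's high-accuracy/high-fidelity distinction maps onto this sketch: accuracy compares the surrogate against the true labels, while fidelity (computed above) compares it against the victim's outputs, right or wrong.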

Entangled watermarks as a defense against model extraction

H Jia, CA Choquette-Choo, V Chandrasekaran… - 30th USENIX security …, 2021 - usenix.org
Machine learning involves expensive data collection and training procedures. Model owners
may be concerned that valuable intellectual property can be leaked if adversaries mount …

Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks

DJ Miller, Z Xiang, G Kesidis - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
With wide deployment of machine learning (ML)-based systems for a variety of applications
including medical, military, automotive, genomic, multimedia, and social networking, there is …

Protecting intellectual property of language generation apis with lexical watermark

X He, Q Xu, L Lyu, F Wu, C Wang - … of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Nowadays, due to the breakthrough in natural language generation (NLG), including
machine translation, document summarization, image captioning, etc., NLG models have …
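The core idea of a lexical watermark — the API owner substitutes a fixed set of synonyms into generated text, so that an imitation model trained on that output inherits the skewed word choices — can be sketched briefly. The synonym table below is an illustrative assumption, not the substitution set used by He et al.

```python
# Minimal sketch of a lexical watermark for a text-generation API.
# Assumed watermark table: each key is silently replaced by its synonym.
WATERMARK = {"movie": "film", "big": "large", "buy": "purchase"}

def watermark(text: str) -> str:
    """Apply the owner's synonym substitutions to API output."""
    return " ".join(WATERMARK.get(w, w) for w in text.split())

def watermark_hits(text: str) -> tuple[int, int]:
    """Count watermarked synonyms vs. unmarked originals in suspect text.

    A suspect model whose outputs show far more hits than misses likely
    imitated the watermarked API.
    """
    words = text.split()
    hits = sum(w in WATERMARK.values() for w in words)
    misses = sum(w in WATERMARK for w in words)
    return hits, misses

out = watermark("i want to buy a big movie poster")
# out == "i want to purchase a large film poster"
```

Detection then reduces to a frequency test: honest text uses the marked and unmarked variants at natural rates, while an imitator trained on watermarked output overuses the marked ones.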

Prediction poisoning: Towards defenses against dnn model stealing attacks

T Orekondy, B Schiele, M Fritz - arXiv preprint arXiv:1906.10908, 2019 - arxiv.org
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world
applications, e.g., cloud prediction APIs. Recent advances in model functionality …

Proof-of-learning: Definitions and practice

H Jia, M Yaghini, CA Choquette-Choo… - … IEEE Symposium on …, 2021 - ieeexplore.ieee.org
Training machine learning (ML) models typically involves expensive iterative optimization.
Once the model's final parameters are released, there is currently no mechanism for the …

Exploring connections between active learning and model extraction

V Chandrasekaran, K Chaudhuri, I Giacomelli… - 29th USENIX Security …, 2020 - usenix.org
Machine learning is being increasingly used by individuals, research institutions, and
corporations. This has resulted in the surge of Machine Learning-as-a-Service (MLaaS) …

Imitation attacks and defenses for black-box machine translation systems

E Wallace, M Stern, D Song - arXiv preprint arXiv:2004.15015, 2020 - arxiv.org
Adversaries may look to steal or attack black-box NLP systems, either for financial gain or to
exploit model errors. One setting of particular interest is machine translation (MT), where …