On the privacy and security for e-health services in the metaverse: An overview

M Letafati, S Otoum - Ad Hoc Networks, 2023 - Elsevier
Metaverse-enabled healthcare systems are expected to efficiently utilize an unprecedented
amount of health-related data without disclosing sensitive or private information of …

Visual content privacy protection: A survey

R Zhao, Y Zhang, T Wang, W Wen, Y Xiang… - ACM Computing …, 2023 - dl.acm.org
Vision is the most important sense for people and one of the primary means of
cognition. As a result, people tend to use visual content to capture and share their life …

Glaze: Protecting artists from style mimicry by text-to-image models

S Shan, J Cryan, E Wenger, H Zheng… - 32nd USENIX Security …, 2023 - usenix.org
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to
displace many in the professional artist community. In particular, models can learn to mimic …

Anti-DreamBooth: Protecting users from personalized text-to-image synthesis

T Van Le, H Phung, TH Nguyen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Text-to-image diffusion models are nothing short of a revolution, allowing anyone, even without
design skills, to create realistic images from simple text inputs. With powerful personalization …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer

S Hu, X Liu, Y Zhang, M Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
While deep face recognition (FR) systems have shown impressive performance in
identification and verification, they also raise privacy concerns over their excessive …

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …
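
A minimal PyTorch sketch of the idea this snippet describes: error-maximizing (PGD-style) perturbations crafted against a surrogate classifier, whose perturbed images can then be released as training data and act as an availability poison. The surrogate `model` (assumed frozen and in eval mode), inputs `x` in [0, 1], labels `y`, and the L-inf budget are illustrative assumptions, not the paper's exact recipe, which also studies class-targeted variants.

import torch
import torch.nn.functional as F

def craft_poison(model, x, y, eps=8/255, alpha=2/255, steps=40):
    """Return x + delta with ||delta||_inf <= eps that maximizes the surrogate's loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss (error-maximizing)
            delta.clamp_(-eps, eps)              # stay inside the L-inf budget
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()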

On success and simplicity: A second look at transferable targeted attacks

Z Zhao, Z Liu, M Larson - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Achieving transferability of targeted attacks is reputed to be remarkably difficult. The current
state of the art has resorted to resource-intensive solutions that necessitate training model …

Prompt-specific poisoning attacks on text-to-image generative models

S Shan, W Ding, J Passananti, H Zheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
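
For illustration only, a naive "dirty-label" poison on a hypothetical JSONL caption manifest (the `image`/`caption` fields are assumptions): captions mentioning one concept are replaced with a decoy caption, so a text-to-image model trained on the manifest learns a wrong association for that concept. This shows only the simplest form of training-time poisoning, not the paper's clean-label, prompt-specific method.

import json

def poison_manifest(in_path, out_path, target_concept="dog",
                    decoy_caption="a photo of a cat"):
    # Rewrite captions of records that mention `target_concept` so the model
    # learns a wrong image-text association at training time.
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            record = json.loads(line)  # e.g. {"image": "...", "caption": "..."}
            if target_concept in record["caption"].lower():
                record["caption"] = decoy_caption
            fout.write(json.dumps(record) + "\n")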

Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
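
A minimal sketch of the compression countermeasure named in the title, assuming Pillow is available: re-encoding each image at low JPEG quality before it is used for training, which tends to wash out small perturbative poisons. The quality value of 10 is illustrative rather than a tuned setting from the paper.

from io import BytesIO
from PIL import Image

def jpeg_squeeze(image, quality=10):
    # Re-encode at low JPEG quality so that small, high-frequency poisoning
    # perturbations are largely discarded before the image enters training.
    buf = BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()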