Security and privacy on generative data in AIGC: A survey

T Wang, Y Zhang, S Qi, R Zhao, Z Xia… - ACM Computing Surveys, 2024 - dl.acm.org
The advent of artificial intelligence-generated content (AIGC) represents a pivotal moment in
the evolution of information technology. With AIGC, it can be effortless to generate high …

Identifying and mitigating the security risks of generative AI

C Barrett, B Boyd, E Bursztein, N Carlini… - … and Trends® in …, 2023 - nowpublishers.com
Every major technical invention resurfaces the dual-use dilemma—the new technology has
the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such …

EditGuard: Versatile image watermarking for tamper localization and copyright protection

X Zhang, R Li, J Yu, Y Xu, W Li… - Proceedings of the …, 2024 - openaccess.thecvf.com
In the era of AI-generated content (AIGC), malicious tampering poses imminent threats to
copyright integrity and information security. Current deep image watermarking, while widely …

Towards understanding the interplay of generative artificial intelligence and the internet

G Martínez, L Watson, P Reviriego… - … Workshop on Epistemic …, 2023 - Springer
The rapid adoption of generative Artificial Intelligence (AI) tools that can generate realistic
images or text, such as DALL-E, MidJourney, or ChatGPT, has put the societal impacts of …

Protecting society from AI misuse: when are restrictions on capabilities warranted?

M Anderljung, J Hazell, M von Knebel - AI & SOCIETY, 2024 - Springer
Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more
capable. In fact, AI systems are already starting to help automate fraudulent activities, violate …

A survey on detection of LLMs-generated content

X Yang, L Pan, X Zhao, H Chen, L Petzold… - arXiv preprint arXiv …, 2023 - arxiv.org
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT
have led to an increase in synthetic content generation with implications across a variety of …

Generative AI models should include detection mechanisms as a condition for public release

A Knott, D Pedreschi, R Chatila, T Chakraborti… - Ethics and Information …, 2023 - Springer
The new wave of 'foundation models'—general-purpose generative AI models for
production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance …

Leveraging optimization for adaptive attacks on image watermarks

N Lukas, A Diaa, L Fenaux, F Kerschbaum - arXiv preprint arXiv …, 2023 - arxiv.org
Untrustworthy users can misuse image generators to synthesize high-quality deepfakes and
engage in online spam or disinformation campaigns. Watermarking deters misuse by …

Benchmarking the robustness of image watermarks

B An, M Ding, T Rabbani, A Agrawal, Y Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper investigates the weaknesses of image watermarking techniques. We present
WAVES (Watermark Analysis Via Enhanced Stress-testing), a novel benchmark for …

Safe-SD: Safe and traceable stable diffusion with text prompt trigger for invisible generative watermarking

Z Ma, G Jia, B Qi, B Zhou - Proceedings of the 32nd ACM International …, 2024 - dl.acm.org
Recently, stable diffusion (SD) models have flourished in the field of image
synthesis and personalized editing, with a range of photorealistic and unprecedented …