Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2023 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews

W Liang, Z Izzo, Y Zhang, H Lepp, H Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
We present an approach for estimating the fraction of text in a large corpus which is likely to
be substantially modified or produced by a large language model (LLM). Our maximum …
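To make the corpus-level estimation idea concrete (the snippet breaks off at "Our maximum …", presumably a maximum likelihood estimator): the sketch below shows one way such a fraction could be estimated, assuming the observed word distribution is a two-component mixture of known human and AI reference distributions. The function name, the mixture form, and the toy numbers are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: estimate the fraction alpha of LLM-produced text in a corpus
# by maximum likelihood, assuming the corpus word distribution is the mixture
# (1 - alpha) * P_human + alpha * P_ai of two known reference distributions.
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_llm_fraction(corpus_counts, p_human, p_ai):
    """corpus_counts: observed count per vocabulary word (1-D array).
    p_human, p_ai: reference probabilities per word, same length, each summing to 1."""
    corpus_counts = np.asarray(corpus_counts, dtype=float)
    p_human = np.asarray(p_human, dtype=float)
    p_ai = np.asarray(p_ai, dtype=float)

    def neg_log_likelihood(alpha):
        mix = (1.0 - alpha) * p_human + alpha * p_ai
        # Small floor avoids log(0) for words absent from one reference distribution.
        return -np.sum(corpus_counts * np.log(np.clip(mix, 1e-12, None)))

    result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
    return result.x

# Toy usage with a 4-word vocabulary (numbers are illustrative only).
p_human = np.array([0.40, 0.30, 0.20, 0.10])
p_ai    = np.array([0.10, 0.20, 0.30, 0.40])
observed = np.array([310, 270, 240, 180])  # word counts in the mixed corpus
print(f"estimated LLM fraction: {estimate_llm_fraction(observed, p_human, p_ai):.2f}")
```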

A survey on detection of LLMs-generated content

X Yang, L Pan, X Zhao, H Chen, L Petzold… - arXiv preprint arXiv …, 2023 - arxiv.org
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT
have led to an increase in synthetic content generation with implications across a variety of …

Watermark stealing in large language models

N Jovanović, R Staab, M Vechev - arXiv preprint arXiv:2402.19361, 2024 - arxiv.org
LLM watermarking has attracted attention as a promising way to detect AI-generated
content, with some works suggesting that current schemes may already be fit for …

Stumbling blocks: Stress testing the robustness of machine-generated text detectors under attacks

Y Wang, S Feng, AB Hou, X Pu, C Shen, X Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread use of large language models (LLMs) is increasing the demand for
methods that detect machine-generated text to prevent misuse. The goal of our study is to …

Adaptive text watermark for large language models

Y Liu, Y Bu - arXiv preprint arXiv:2401.13927, 2024 - arxiv.org
The advancement of Large Language Models (LLMs) has led to increasing concerns about
the misuse of AI-generated text, and watermarking for LLM-generated text has emerged as a …

Improving the generation quality of watermarked large language models via word importance scoring

Y Li, Y Wang, Z Shi, CJ Hsieh - arXiv preprint arXiv:2311.09668, 2023 - arxiv.org
The strong general capabilities of Large Language Models (LLMs) bring potential ethical
risks if they are unrestrictedly accessible to malicious users. Token-level watermarking …
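The "token-level watermarking" this entry refers to typically seeds a pseudorandom "green" subset of the vocabulary from the preceding token and biases generation toward it; a detector then counts green tokens and tests the count against the unwatermarked rate. The sketch below illustrates only that detection test, under assumed values for the vocabulary size, green-list ratio, and hashing rule; it is a generic illustration, not this paper's scheme.

```python
# Hedged sketch of green-list watermark detection in the style of token-level
# schemes; the hash rule, green-list ratio GAMMA, and constants are assumptions.
import hashlib
import math

VOCAB_SIZE = 50_000   # assumed vocabulary size
GAMMA = 0.25          # assumed fraction of the vocabulary marked "green" per step

def is_green(prev_token_id: int, token_id: int) -> bool:
    """Pseudorandomly mark a token green, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token_id}:{token_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % VOCAB_SIZE < GAMMA * VOCAB_SIZE

def detection_z_score(token_ids: list[int]) -> float:
    """z-score of the green-token count against the null (unwatermarked) rate GAMMA."""
    n = len(token_ids) - 1
    greens = sum(is_green(prev, cur) for prev, cur in zip(token_ids, token_ids[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1.0 - GAMMA))

# A large z-score (e.g., > 4) suggests the text was generated with the watermark.
```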

Large Language Model Watermark Stealing With Mixed Integer Programming

Z Zhang, X Zhang, Y Zhang, LY Zhang, C Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
The Large Language Model (LLM) watermark is a newly emerging technique that shows
promise in addressing concerns surrounding LLM copyright, monitoring AI-generated text …

Measuring Human Contribution in AI-Assisted Content Generation

Y Xie, T Qi, J Yi, R Whalen, J Huang, Q Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
With the growing prevalence of generative artificial intelligence (AI), an increasing amount of
content is no longer exclusively generated by humans but by generative AI models with …