Optimization-based Prompt Injection Attack to LLM-as-a-Judge

J Shi, Z Yuan, Y Liu, Y Huang, P Zhou, L Sun… - Proceedings of the …, 2024 - dl.acm.org
LLM-as-a-Judge uses a large language model (LLM) to select the best response from a set
of candidates for a given question. LLM-as-a-Judge has many applications such as LLM …
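
For context, a minimal sketch of the LLM-as-a-Judge pattern the snippet describes, in Python; query_llm is a hypothetical placeholder for any LLM completion API, not an interface from the paper:

def query_llm(prompt: str) -> str:
    # Placeholder: substitute any chat/completion API here.
    raise NotImplementedError

def judge_best_response(question: str, candidates: list[str]) -> int:
    # Build a judge prompt that lists each candidate response.
    numbered = "\n".join(
        f"Response {i + 1}: {c}" for i, c in enumerate(candidates)
    )
    prompt = (
        "You are an impartial judge. Given the question and the candidate "
        "responses below, reply with only the number of the best response.\n\n"
        f"Question: {question}\n\n{numbered}"
    )
    verdict = query_llm(prompt)          # hypothetical LLM call
    return int(verdict.strip()) - 1      # index of the selected candidate

The attack surface studied in the paper arises because the candidate texts are interpolated directly into the judge's prompt.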

LLM-PBE: Assessing Data Privacy in Large Language Models

Q Li, J Hong, C Xie, J Tan, R Xin, J Hou, X Yin… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have become integral to numerous domains, significantly
advancing applications in data management, mining, and analysis. Their profound …

On the (In)Security of LLM App Stores

X Hou, Y Zhao, H Wang - arXiv preprint arXiv:2407.08422, 2024 - arxiv.org
LLM app stores have seen rapid growth, leading to the proliferation of numerous custom
LLM apps. However, this expansion raises security concerns. In this study, we propose a …

PromptFuzz: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs

J Yu, Y Shao, H Miao, J Shi, X Xing - arXiv preprint arXiv:2409.14729, 2024 - arxiv.org
Large Language Models (LLMs) have gained widespread use in various applications due to
their powerful capability to generate human-like text. However, prompt injection attacks …

Reconstruction of Differentially Private Text Sanitization via Large Language Models

S Pang, Z Lu, H Wang, P Fu, Y Zhou, M Xue… - arXiv preprint arXiv …, 2024 - arxiv.org
Differential privacy (DP) is the de facto privacy standard against privacy leakage attacks,
including many recently discovered ones against large language models (LLMs). However …
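
For reference, the guarantee "DP" denotes here is the textbook epsilon-differential-privacy definition (standard background, not taken from this paper): a randomized mechanism M satisfies epsilon-DP if, for all neighboring datasets D and D' and every set of outputs S,

    \Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S]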

Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents

Y Gan, Y Yang, Z Ma, P He, R Zeng, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
With the continuous development of large language models (LLMs), transformer-based
models have made groundbreaking advances in numerous natural language processing …

RAG-Thief: Scalable Extraction of Private Data from Retrieval-Augmented Generation Applications with Agent-Based Attacks

C Jiang, X Pan, G Hong, C Bao, M Yang - arXiv preprint arXiv:2411.14110, 2024 - arxiv.org
While large language models (LLMs) have achieved notable success in generative tasks,
they still face limitations, such as lacking up-to-date knowledge and producing …

Governing Open Vocabulary Data Leaks Using an Edge LLM through Programming by Example

Q Li, J Wen, H Jin - Proceedings of the ACM on Interactive, Mobile …, 2024 - dl.acm.org
A major concern with integrating large language model (LLM) services (e.g., ChatGPT) into
workplaces is that employees may inadvertently leak sensitive information through their …

Data Stealing Attacks against Large Language Models via Backdooring

J He, G Hou, X Jia, Y Chen, W Liao, Y Zhou, R Zhou - Electronics, 2024 - mdpi.com
Large language models (LLMs) have gained immense attention and are being increasingly
applied in various domains. However, this technological leap forward poses serious security …

The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems

L Song, Z Pang, W Wang, Z Wang, XF Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
The wide deployment of Large Language Models (LLMs) has given rise to strong demands
for optimizing their inference performance. Today's techniques serving this purpose primarily …