Survey of vulnerabilities in large language models revealed by adversarial attacks
Large Language Models (LLMs) are swiftly advancing in architecture and capability, and as
they integrate more deeply into complex systems, the urgency to scrutinize their security …
A new era in LLM security: Exploring security concerns in real-world LLM-based systems
Large Language Model (LLM) systems are inherently compositional, with an individual LLM
serving as the core foundation and additional layers of objects such as plugins, sandbox …
A comprehensive survey of attack techniques, implementation, and mitigation strategies in large language models
A Esmradi, DW Yip, CF Chan - International Conference on Ubiquitous …, 2023 - Springer
Ensuring the security of large language models (LLMs) is an ongoing challenge despite
their widespread popularity. Developers work to enhance LLM security, but vulnerabilities …
Comprehensive evaluation of ChatGPT reliability through multilingual inquiries
PCR Puttaparthi, SS Deo, H Gul, Y Tang… - arXiv preprint arXiv …, 2023 - arxiv.org
ChatGPT is currently the most popular large language model (LLM), with over 100 million
users, making a significant impact on people's lives. However, due to the presence of …
Automatic and universal prompt injection attacks against large language models
Large Language Models (LLMs) excel in processing and generating human language,
powered by their ability to interpret and follow instructions. However, their capabilities can …
Maatphor: Automated variant analysis for prompt injection attacks
Prompt injection has emerged as a serious security threat to large language models (LLMs).
At present, the best practice for defending against newly discovered prompt injection …
VTQAGen: BART-based Generative Model For Visual Text Question Answering
H Chen, T Wan, Z Lin, K Xu, J Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Visual Text Question Answering (VTQA) is a challenging task that requires answering
questions pertaining to visual content by combining image understanding and language …
Crowdsourced Data Collection Opens New Avenues for the Behavioral Sciences to Impact Real-World Applications
DJ Kravitz, SR Mitroff… - Policy Insights from …, 2024 - journals.sagepub.com
The behavioral sciences have had great success in their study of the mechanisms that drive
behavior. However, they have had less impact on applied settings or policy. This gap results …
System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
Large Language Model-based systems (LLM systems) are information and query
processing systems that use LLMs to plan operations from natural-language prompts and …
Assessing Cybersecurity Vulnerabilities in Code Large Language Models
Instruction-tuned Code Large Language Models (Code LLMs) are increasingly utilized as AI
coding assistants and integrated into various applications. However, the cybersecurity …