Authors
Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, Siddharth Garg, Brendan Dolan-Gavitt
Publication date
2023
Conference paper
32nd USENIX Security Symposium (USENIX Security 23)
Pages
2205-2222
Description
Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
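For concreteness, the task the participants faced, a singly-linked 'shopping list' in C, might look roughly like the minimal sketch below. The type and function names here (item, list_append, list_free) are illustrative assumptions, not the study's actual scaffold, which the paper does not reproduce in this abstract.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical node type for a singly-linked shopping list;
 * field and type names are illustrative, not from the study. */
typedef struct item {
    char name[64];
    int quantity;
    struct item *next;
} item;

/* Append a new item at the tail; returns the (possibly new) head,
 * or the old head unchanged if allocation fails. */
static item *list_append(item *head, const char *name, int quantity) {
    item *node = malloc(sizeof *node);
    if (!node)
        return head;
    strncpy(node->name, name, sizeof node->name - 1);
    node->name[sizeof node->name - 1] = '\0';  /* ensure termination */
    node->quantity = quantity;
    node->next = NULL;
    if (!head)
        return node;
    item *cur = head;
    while (cur->next)
        cur = cur->next;
    cur->next = node;
    return head;
}

/* Free every node in the list. */
static void list_free(item *head) {
    while (head) {
        item *next = head->next;
        free(head);
        head = next;
    }
}

int main(void) {
    item *list = NULL;
    list = list_append(list, "milk", 2);
    list = list_append(list, "bread", 1);
    for (item *it = list; it; it = it->next)
        printf("%s x%d\n", it->name, it->quantity);
    list_free(list);
    return 0;
}

The pointer handling shown here (manual allocation, tail traversal, explicit termination of the copied string) is exactly the kind of low-level C manipulation whose security impact the study measures.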
Scholar articles
G Sandoval, H Pearce, T Nys, R Karri, S Garg… - 32nd USENIX Security Symposium (USENIX Security …, 2023
G Sandoval, H Pearce, T Nys, R Karri, B Dolan-Gavitt… - arXiv preprint arXiv:2208.09727, 2022