Large language models for software engineering: A systematic literature review

X Hou, Y Zhao, Y Liu, Z Yang, K Wang, L Li… - ACM Transactions on …, 2023 - dl.acm.org
Large Language Models (LLMs) have significantly impacted numerous domains, including
Software Engineering (SE). Many recent publications have explored LLMs applied to …

Software testing with large language models: Survey, landscape, and vision

J Wang, Y Huang, C Chen, Z Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Pre-trained large language models (LLMs) have recently emerged as a breakthrough
technology in natural language processing and artificial intelligence, with the ability to …

No more fine-tuning? An experimental evaluation of prompt tuning in code intelligence

C Wang, Y Yang, C Gao, Y Peng, H Zhang… - Proceedings of the 30th …, 2022 - dl.acm.org
Pre-trained models have been shown effective in many code intelligence tasks. These
models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream …

Data quality for software vulnerability datasets

R Croft, MA Babar, MM Kholoosi - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
The use of learning-based techniques to achieve automated software vulnerability detection
has been of longstanding interest within the software security domain. These data-driven …

Large language models for cyber security: A systematic literature review

HX Xu, SA Wang, N Li, K Wang, Y Zhao, K Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancement of Large Language Models (LLMs) has opened up new
opportunities for leveraging artificial intelligence in various domains, including cybersecurity …

Are we building on the rock? On the importance of data preprocessing for code summarization

L Shi, F Mu, X Chen, S Wang, J Wang, Y Yang… - Proceedings of the 30th …, 2022 - dl.acm.org
Code summarization, the task of generating useful comments given the code, has long been
of interest. Most of the existing code summarization models are trained and validated on …

CCT5: A code-change-oriented pre-trained model

B Lin, S Wang, Z Liu, Y Liu, X Xia, X Mao - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Software is constantly changing, requiring developers to perform several derived tasks in a
timely manner, such as writing a description for the intention of the code change, or …

Prompt tuning in code intelligence: An experimental evaluation

C Wang, Y Yang, C Gao, Y Peng… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained models have been shown effective in many code intelligence tasks, such as
automatic code summarization and defect prediction. These models are pre-trained on large …

Keeping pace with ever-increasing data: Towards continual learning of code intelligence models

S Gao, H Zhang, C Gao, C Wang - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Previous research on code intelligence usually trains a deep learning model on a fixed
dataset in an offline manner. However, in real-world scenarios, new code repositories …

TransRepair: Context-aware program repair for compilation errors

X Li, S Liu, R Feng, G Meng, X Xie, K Chen… - Proceedings of the 37th …, 2022 - dl.acm.org
Automatically fixing compilation errors can greatly raise the productivity of software
development by guiding novice or AI programmers to write and debug code. Recently …