Large language models for software engineering: A systematic literature review
Large Language Models (LLMs) have significantly impacted numerous domains, including
Software Engineering (SE). Many recent publications have explored LLMs applied to …
Software testing with large language models: Survey, landscape, and vision
Pre-trained large language models (LLMs) have recently emerged as a breakthrough
technology in natural language processing and artificial intelligence, with the ability to …
No more fine-tuning? An experimental evaluation of prompt tuning in code intelligence
Pre-trained models have been shown effective in many code intelligence tasks. These
models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream …
Data quality for software vulnerability datasets
The use of learning-based techniques to achieve automated software vulnerability detection
has been of longstanding interest within the software security domain. These data-driven …
Large language models for cyber security: A systematic literature review
The rapid advancement of Large Language Models (LLMs) has opened up new
opportunities for leveraging artificial intelligence in various domains, including cybersecurity …
Are we building on the rock? On the importance of data preprocessing for code summarization
Code summarization, the task of generating useful comments given the code, has long been
of interest. Most of the existing code summarization models are trained and validated on …
CCT5: A code-change-oriented pre-trained model
Software is constantly changing, requiring developers to perform several derived tasks in a
timely manner, such as writing a description for the intention of the code change, or …
Prompt tuning in code intelligence: An experimental evaluation
Pre-trained models have been shown effective in many code intelligence tasks, such as
automatic code summarization and defect prediction. These models are pre-trained on large …
Keeping pace with ever-increasing data: Towards continual learning of code intelligence models
Previous research on code intelligence usually trains a deep learning model on a fixed
dataset in an offline manner. However, in real-world scenarios, new code repositories …
TransRepair: Context-aware program repair for compilation errors
Automatically fixing compilation errors can greatly raise the productivity of software
development, by guiding novice or AI programmers to write and debug code. Recently …