Automated repair of programs from large language models

Z Fan, X Gao, M Mirchev… - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Large language models, such as Codex, have shown the capability to produce code for
many programming tasks. However, the success rate of existing models is low, especially for …

Jigsaw: Large language models meet program synthesis

N Jain, S Vaidyanath, A Iyer, N Natarajan… - Proceedings of the 44th …, 2022 - dl.acm.org
Large pre-trained language models such as GPT-3 [10], Codex [11], and Google's language
model [7] are now capable of generating code from natural language specifications of …

Discovering the syntax and strategies of natural language programming with generative language models

E Jiang, E Toh, A Molina, K Olson, C Kayacik… - Proceedings of the …, 2022 - dl.acm.org
In this paper, we present a natural language code synthesis tool, GenLine, backed by 1) a
large generative language model and 2) a set of task-specific prompts that create or change …

Interactive code generation via test-driven user-intent formalization

SK Lahiri, S Fakhoury, A Naik, G Sakkas… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models (LLMs) have shown great potential in automating significant
aspects of coding by producing natural code from informal natural language (NL) intent …

Repairing bugs in python assignments using large language models

J Zhang, J Cambronero, S Gulwani, V Le… - arXiv preprint arXiv …, 2022 - arxiv.org
Students often make mistakes on their introductory programming assignments as part of
their learning process. Unfortunately, providing custom repairs for these mistakes can …

SatLM: Satisfiability-aided language models using declarative prompting

X Ye, Q Chen, I Dillig, G Durrett - Advances in Neural …, 2024 - proceedings.neurips.cc
Prior work has combined chain-of-thought prompting in large language models (LLMs) with
programmatic representations to perform effective and transparent reasoning. While such an …

Using transfer learning for code-related tasks

A Mastropaolo, N Cooper, DN Palacio… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
Deep learning (DL) techniques have been used to support several code-related tasks such
as code summarization and bug-fixing. In particular, pre-trained transformer models are on …

FlashFill++: Scaling programming by example by cutting to the chase

J Cambronero, S Gulwani, V Le, D Perelman… - Proceedings of the …, 2023 - dl.acm.org
Programming-by-Examples (PBE) involves synthesizing an "intended program" from a small
set of user-provided input-output examples. A key PBE strategy has been to restrict the …

De-hallucinator: Iterative grounding for LLM-based code completion

A Eghbali, M Pradel - arXiv preprint arXiv:2401.01701, 2024 - jespereggers.com
Large language models (LLMs) trained on datasets of publicly available source code have
established a new state-of-the-art in code completion. However, these models are mostly …

LILO: Learning interpretable libraries by compressing and documenting code

G Grand, L Wong, M Bowers, TX Olausson… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) now excel at code generation, a key aspect of software
development is the art of refactoring: consolidating code into libraries of reusable and …