Automated repair of programs from large language models
Large language models such as Codex have shown the capability to produce code for
many programming tasks. However, the success rate of existing models is low, especially for …
Jigsaw: Large language models meet program synthesis
Large pre-trained language models such as GPT-3 [10], Codex [11], and Google's language
model [7] are now capable of generating code from natural language specifications of …
Discovering the syntax and strategies of natural language programming with generative language models
In this paper, we present a natural language code synthesis tool, GenLine, backed by 1) a
large generative language model and 2) a set of task-specific prompts that create or change …
Interactive code generation via test-driven user-intent formalization
Large language models (LLMs) have shown great potential in automating significant
aspects of coding by producing natural code from informal natural language (NL) intent …
Repairing bugs in Python assignments using large language models
Students often make mistakes on their introductory programming assignments as part of
their learning process. Unfortunately, providing custom repairs for these mistakes can …
SatLM: Satisfiability-aided language models using declarative prompting
Prior work has combined chain-of-thought prompting in large language models (LLMs) with
programmatic representations to perform effective and transparent reasoning. While such an …
Using transfer learning for code-related tasks
Deep learning (DL) techniques have been used to support several code-related tasks such
as code summarization and bug-fixing. In particular, pre-trained transformer models are on …
Flashfill++: Scaling programming by example by cutting to the chase
Programming-by-Examples (PBE) involves synthesizing an" intended program" from a small
set of user-provided input-output examples. A key PBE strategy has been to restrict the …
De-hallucinator: Iterative grounding for LLM-based code completion
Large language models (LLMs) trained on datasets of publicly available source code have
established a new state-of-the-art in code completion. However, these models are mostly …
Lilo: Learning interpretable libraries by compressing and documenting code
While large language models (LLMs) now excel at code generation, a key aspect of software
development is the art of refactoring: consolidating code into libraries of reusable and …