Branch-solve-merge improves large language model evaluation and generation
Large Language Models (LLMs) are frequently used for multi-faceted language generation
and evaluation tasks that involve satisfying intricate user constraints or taking into account …
Large language models are not yet human-level evaluators for abstractive summarization
With the recent undeniable advancement in reasoning abilities in large language models
(LLMs) like ChatGPT and GPT-4, there is a growing trend for using LLMs on various tasks …
ADaPT: As-needed decomposition and planning with language models
Large Language Models (LLMs) are increasingly being used for interactive decision-making
tasks requiring planning and adapting to the environment. Recent works employ LLMs-as …
Iterated decomposition: Improving science q&a by supervising reasoning processes
J Reppert, B Rachbach, C George, L Stebbing… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) can perform complex reasoning either end-to-end, with hidden
latent state, or compositionally, with transparent intermediate state. Composition offers …
FollowupQG: Towards information-seeking follow-up question generation
Humans ask follow-up questions driven by curiosity, which reflects a creative human
cognitive process. We introduce the task of real-world information-seeking follow-up …
ExplainMeetSum: A dataset for explainable meeting summarization aligned with human intent
To enhance the explainability of meeting summarization, we construct a new dataset called
“ExplainMeetSum,” an augmented version of QMSum, by newly annotating evidence …
ReGAL: Refactoring programs to discover generalizable abstractions
While large language models (LLMs) are increasingly being used for program synthesis,
they lack the global view needed to develop useful abstractions; they generally predict …
MURMUR: Modular multi-step reasoning for semi-structured data-to-text generation
Prompting large language models has enabled significant recent progress in multi-step
reasoning over text. However, when applied to text generation from semi-structured data …
Best-k Search Algorithm for Neural Text Generation
Modern natural language generation paradigms require a good decoding strategy to obtain
quality sequences out of the model. Beam search yields high-quality but low diversity …
HyFit: Hybrid Fine-Tuning With Diverse Sampling for Abstractive Summarization
S Zhao, Y Cheng, Y Zhang, J Chen… - … Transactions on Big …, 2024 - ieeexplore.ieee.org
Abstractive summarization has made significant progress in recent years, which aims to
generate a concise and coherent summary that contains the most important facts from the …