Connecting large language models with evolutionary algorithms yields powerful prompt optimizers
Large Language Models (LLMs) excel in various tasks, but they rely on carefully crafted
prompts that often demand substantial human effort. To automate this process, in this paper …
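The title of this entry names the core recipe: use an LLM as the variation operator inside a standard evolutionary loop over candidate prompts. The sketch below is only a minimal illustration of that general idea, not the paper's algorithm; seed_prompts, score(prompt) -> float, and llm(instruction) -> str are assumed to be supplied by the caller, and it assumes at least two seed prompts.

import random

def evolve_prompts(seed_prompts, score, llm, generations=10, population_size=8):
    # Evolutionary search over prompts; the LLM performs crossover and mutation in text.
    # `score` evaluates a prompt on a small dev set; `llm` is any text-completion call.
    population = list(seed_prompts)
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[: max(2, population_size // 2)]   # keep the best half
        children = []
        while len(parents) + len(children) < population_size:
            p1, p2 = random.sample(parents, 2)
            children.append(llm(
                "Combine the two prompts below into a single new prompt that keeps "
                "their strengths, then reword it slightly.\n"
                f"Prompt A: {p1}\nPrompt B: {p2}"
            ))
        population = parents + children
    return max(population, key=score)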
Video in-context learning
In-context learning for vision data has been underexplored compared with that in natural
language. Previous works studied image in-context learning, urging models to generate a …
Assurance of AI systems from a dependability perspective
R Bloomfield, J Rushby - arXiv preprint arXiv:2407.13948, 2024 - arxiv.org
We outline the principles of classical assurance for computer-based systems that pose
significant risks. We then consider application of these principles to systems that employ …
Investigating the Effects of Dialogue Summarization on Intervention in Human-System Collaborative Dialogue
S Yamashita, S Mochizuki, K Kawasaki… - Proceedings of the 11th …, 2023 - dl.acm.org
Dialogue systems are widely utilized in chatbots and call centers. However, it is often difficult
for such systems to deliver fully autonomous dialogue. For users to have a better dialogue …
TasTe: Teaching Large Language Models to Translate through Self-Reflection
Large language models (LLMs) have exhibited remarkable performance in various natural
language processing tasks. Techniques like instruction tuning have effectively enhanced the …
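The TasTe title suggests a draft-then-reflect loop: the model first translates, then critiques its own draft, then revises. Below is a hedged sketch of that generic pattern, not necessarily the paper's exact procedure, assuming a single llm(prompt) -> str helper.

def translate_with_self_reflection(llm, source_text, src_lang="German", tgt_lang="English"):
    # 1) Draft translation.
    draft = llm(f"Translate the following {src_lang} text into {tgt_lang}:\n{source_text}")
    # 2) Self-reflection: the model critiques its own draft.
    critique = llm(
        f"Source ({src_lang}): {source_text}\nDraft translation ({tgt_lang}): {draft}\n"
        "List any mistranslations, omissions, or awkward phrasings in the draft."
    )
    # 3) Refinement conditioned on the critique.
    return llm(
        f"Source ({src_lang}): {source_text}\nDraft: {draft}\nCritique: {critique}\n"
        f"Produce an improved {tgt_lang} translation."
    )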
Rethinking Semantic Parsing for Large Language Models: Enhancing LLM Performance with Semantic Hints
Semantic Parsing aims to capture the meaning of a sentence and convert it into a logical,
structured form. Previous studies show that semantic parsing enhances the performance of …
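As a concrete illustration of what "a logical, structured form" can look like when supplied to an LLM as a hint, the snippet below pairs a question with a GeoQuery-style logical form; the hint format and prompt wording here are assumptions for illustration, not the paper's.

question = "Which rivers run through states bordering Texas?"

# Hand-written logical form of the question (FunQL-style, GeoQuery domain).
semantic_hint = "answer(river(traverse(state(next_to(stateid('texas'))))))"

prompt = (
    f"Question: {question}\n"
    f"Logical form: {semantic_hint}\n"
    "Use the logical form as a hint about the question's structure, then answer it."
)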
Hint Marginalization for Improved Reasoning in Large Language Models
Large Language Models (LLMs) have exhibited an impressive capability to perform
reasoning tasks, especially if they are encouraged to generate a sequence of intermediate …
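Read together with the title, this snippet points at aggregating over several sampled intermediate hints rather than trusting a single reasoning chain. The sketch below is only a crude illustration of that idea, with a majority vote standing in for the marginalization; generate_hint and answer_with_hint are hypothetical LLM calls, and the paper's actual aggregation may differ.

from collections import Counter

def answer_by_hint_marginalization(question, generate_hint, answer_with_hint, n_hints=5):
    answers = []
    for _ in range(n_hints):
        hint = generate_hint(question)                  # sample an intermediate hint
        answers.append(answer_with_hint(question, hint))
    # Majority vote approximates picking the answer with the largest marginal mass.
    return Counter(answers).most_common(1)[0][0]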
Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning
T Zhang, B Peng, D Bollegala - arXiv preprint arXiv:2404.16807, 2024 - arxiv.org
Generative Commonsense Reasoning (GCR) requires a model to reason about a situation
using commonsense knowledge, while generating coherent sentences. Although the quality …
AutoFeedback: An LLM-based Framework for Efficient and Accurate API Request Generation
H Liu, J Liao, D Feng, K Xu, H Wang - arXiv preprint arXiv:2410.06943, 2024 - arxiv.org
Large Language Models (LLMs) leverage external tools primarily by generating API requests, which
improves task completion efficiency. The accuracy of API request generation …
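The snippet stops before the method, but the setting it describes, an LLM emitting API requests that must be well-formed, is easy to make concrete. Below is a generic generate-validate-retry loop, not the AutoFeedback framework itself; llm, the schema layout, and the retry policy are all assumptions.

import json

def generate_api_request(llm, task, api_schema, max_retries=3):
    feedback = ""
    for _ in range(max_retries):
        raw = llm(
            f"Task: {task}\nAPI schema: {json.dumps(api_schema)}\n{feedback}"
            "Return only a JSON object containing the fields the schema requires."
        )
        try:
            request = json.loads(raw)
        except json.JSONDecodeError as err:
            feedback = f"Your previous output was not valid JSON ({err}). "
            continue
        missing = [k for k in api_schema.get("required", []) if k not in request]
        if not missing:
            return request
        feedback = f"Your previous output was missing required fields: {missing}. "
    raise ValueError("Could not produce a valid API request within the retry budget.")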
Mitigating Knowledge Conflicts in Data-to-Text Generation via the Internalization of Fact Extraction
Large Language Models (LLMs) have made remarkable advancements in Natural
Language Generation. Nonetheless, LLMs are prone to encountering knowledge conflicts …