Fine-tuning aligned language models compromises safety, even when users do not intend to!

X Qi, Y Zeng, T Xie, PY Chen, R Jia, P Mittal… - arXiv preprint arXiv …, 2023 - arxiv.org
Optimizing large language models (LLMs) for downstream use cases often involves the
customization of pre-trained LLMs through further fine-tuning. Meta's open release of Llama …

TPTU: Task planning and tool usage of large language model-based AI agents

J Ruan, Y Chen, B Zhang, Z Xu, T Bao… - … Models for Decision …, 2023 - openreview.net
With recent advancements in natural language processing, Large Language Models (LLMs)
have emerged as powerful tools for various real-world applications. Despite their prowess …

A review of current trends, techniques, and challenges in large language models (LLMs)

R Patil, V Gudivada - Applied Sciences, 2024 - mdpi.com
Natural language processing (NLP) has transformed significantly in the last decade,
especially in the field of language modeling. Large language models (LLMs) have achieved …

Evaluating instruction-tuned large language models on code comprehension and generation

Z Yuan, J Liu, Q Zi, M Liu, X Peng, Y Lou - arXiv preprint arXiv:2308.01240, 2023 - arxiv.org
In this work, we evaluate 10 open-source instructed LLMs on four representative code
comprehension and generation tasks. We have the following main findings. First, for the zero …

A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning or few-shot learning, aims to effectively train a model using only a small amount of …

Enhancing conversational search: Large language model-aided informative query rewriting

F Ye, M Fang, S Li, E Yilmaz - arXiv preprint arXiv:2310.09716, 2023 - arxiv.org
Query rewriting plays a vital role in enhancing conversational search by transforming context-
dependent user queries into standalone forms. Existing approaches primarily leverage …

LLMParser: An exploratory study on using large language models for log parsing

Z Ma, AR Chen, DJ Kim, TH Chen, S Wang - Proceedings of the IEEE …, 2024 - dl.acm.org
Logs are important in modern software development as they record runtime information. Log
parsing is the first step in many log-based analyses and involves extracting structured information from …

Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning

L Weber, E Bruni, D Hupkes - arXiv preprint arXiv:2310.13486, 2023 - arxiv.org
Finding the best way of adapting pre-trained language models to a task is a big challenge in
current NLP. Just like the previous generation of task-tuned models (TT), models that are …

Measuring and controlling persona drift in language model dialogs

K Li, T Liu, N Bashkansky, D Bau, F Viégas… - arXiv preprint arXiv …, 2024 - arxiv.org
Prompting is a standard tool for customizing language-model chatbots, enabling them to
take on a specific "persona". An implicit assumption in the use of prompts is that they will be …

In-context learning with long-context models: An in-depth exploration

A Bertsch, M Ivgi, U Alon, J Berant, MR Gormley… - arXiv preprint arXiv …, 2024 - arxiv.org
As model context lengths continue to increase, the number of demonstrations that can be
provided in-context approaches the size of entire training datasets. We study the behavior of …