Interactive natural language processing

Z Wang, G Zhang, K Yang, N Shi, W Zhou… - arXiv preprint arXiv …, 2023 - arxiv.org
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within
the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …

Full parameter fine-tuning for large language models with limited resources

K Lv, Y Yang, T Liu, Q Gao, Q Guo, X Qiu - arXiv preprint arXiv:2306.09782, 2023 - arxiv.org
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP)
but demand massive GPU resources for training. Lowering the threshold for LLM training …

Parameter-efficient fine-tuning design spaces

J Chen, A Zhang, X Shi, M Li, A Smola… - arXiv preprint arXiv …, 2023 - arxiv.org
Parameter-efficient fine-tuning aims to achieve performance comparable to fine-tuning,
using fewer trainable parameters. Several strategies (e.g., Adapters, prefix tuning, BitFit, and …
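The snippet names BitFit among the parameter-efficient strategies; below is a minimal sketch of that idea, assuming a PyTorch model: freeze every weight and leave only the bias terms trainable, so the optimizer updates only a small fraction of the parameters. The toy two-layer network stands in for a real pre-trained model.

```python
# Minimal BitFit-style parameter-efficient fine-tuning sketch (PyTorch assumed):
# freeze all weights and leave only bias terms trainable.
import torch
from torch import nn

def apply_bitfit(model: nn.Module) -> list[nn.Parameter]:
    """Freeze all parameters except biases; return the trainable ones."""
    trainable = []
    for name, param in model.named_parameters():
        if name.endswith("bias"):
            param.requires_grad = True
            trainable.append(param)
        else:
            param.requires_grad = False
    return trainable

# Toy classifier standing in for a pre-trained model.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
optimizer = torch.optim.AdamW(apply_bitfit(model), lr=1e-4)
```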

Hypertuning: Toward adapting large language models without back-propagation

J Phang, Y Mao, P He, W Chen - … Conference on Machine …, 2023 - proceedings.mlr.press
Fine-tuning large language models for different tasks can be costly and inefficient, and even
methods that reduce the number of tuned parameters still require full gradient-based …

Enhancing Large Language Model-based Speech Recognition by Contextualization for Rare and Ambiguous Words

K Nozawa, T Masuko, T Taniguchi - arXiv preprint arXiv:2408.08027, 2024 - arxiv.org
We develop a large language model (LLM)-based automatic speech recognition (ASR)
system that can be contextualized by providing keywords as prior information in text …
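The abstract describes supplying keywords as prior information in text; the sketch below shows one plausible way to fold a keyword list into a decoder prompt. The function name and prompt format are illustrative assumptions, not the paper's actual interface.

```python
# Illustrative sketch only: passing rare/ambiguous keywords to an LLM-based
# ASR decoder as a plain-text prior. Prompt wording is an assumption.
def build_contextual_prompt(keywords: list[str]) -> str:
    """Fold a keyword bias list into the text prompt seen by the decoder."""
    if not keywords:
        return "Transcribe the audio."
    hint = ", ".join(keywords)
    return f"Transcribe the audio. Likely rare words or names: {hint}."

prompt = build_contextual_prompt(["Taniguchi", "wavefront", "Kagoshima"])
print(prompt)
```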

Team text-understanding-and-analysi at PAN: Utilizing BERT Series Pretraining Model for Multi-Author Writing Style Analysis

Y Huang, L Kong - Working Notes of CLEF, 2024 - ceur-ws.org
We propose a training model based on the BERT series. The method uses a sliding-window
technique to preprocess the datasets used to train for and solve multi-author writing style analysis tasks …
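A minimal sketch of the sliding-window preprocessing idea, assuming a BERT-style 512-token input limit: split a long document into overlapping token chunks. The window and stride values below are illustrative choices, not the team's settings.

```python
# Sliding-window chunking of a long token sequence so each chunk fits a
# BERT-style 512-token limit (leaving room for [CLS]/[SEP]).
def sliding_windows(tokens: list[int], window: int = 510, stride: int = 255):
    """Yield overlapping token windows over the full sequence."""
    if len(tokens) <= window:
        yield tokens
        return
    for start in range(0, len(tokens) - window + stride, stride):
        yield tokens[start:start + window]

chunks = list(sliding_windows(list(range(1200))))
print(len(chunks), [len(c) for c in chunks])
```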

DimA: A Parameter-efficient Fine-tuning Method with Knowledge Transfer Based on Transformer

W Zhang, M Huang, Z Song, Q Miao - Proceedings of the 2024 …, 2024 - aclanthology.org
Fine-tuning is a widely used technique for leveraging pre-trained language models (PLMs)
in downstream tasks, but it can be computationally expensive and storage-intensive. To …

Incremental Unified Parameter Additional Tuning with Basic Memory Replaying

J Deng, J Hu, H Zhang, Y Wang - openreview.net
Class incremental learning (CIL) aims to develop an open intelligence system that can
continuously learn new concepts from new tasks while retaining the knowledge to …
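The title refers to basic memory replaying; as a generic illustration only (not this paper's specific method), the sketch below keeps a small per-class exemplar buffer whose samples can be mixed into later tasks' training batches. Buffer size and sampling policy are assumptions.

```python
# Generic memory-replay sketch for class-incremental learning: a bounded
# per-class exemplar store with random replacement once a bucket is full.
import random
from collections import defaultdict

class ReplayBuffer:
    def __init__(self, per_class: int = 20):
        self.per_class = per_class
        self.store = defaultdict(list)  # class label -> list of examples

    def add(self, example, label):
        bucket = self.store[label]
        if len(bucket) < self.per_class:
            bucket.append(example)
        else:  # random replacement keeps the buffer bounded
            bucket[random.randrange(self.per_class)] = example

    def sample(self, k: int):
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```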

ESEAD: An Enhanced Simple Ensemble and Distillation Framework for Natural Language Processing

M Mei - openreview.net
Large-scale pre-trained language models (PLMs) are today's leading technology for a wide
range of natural language processing tasks. However, the enormous size of these models …
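A common way to distill an ensemble into a compact student, shown as a hedged sketch rather than ESEAD's exact objective: train the student to match the teachers' averaged temperature-softened distribution with a KL term alongside the usual cross-entropy loss.

```python
# Generic ensemble-distillation objective (not necessarily ESEAD's exact loss):
# KL between the student and the averaged, temperature-softened teacher
# distributions, combined with the standard task cross-entropy.
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, teacher_logits_list, labels,
                          temperature: float = 2.0, alpha: float = 0.5):
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        teacher_probs, reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```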