Recyclable tuning for continual pre-training

Y Qin, C Qian, X Han, Y Lin, H Wang, R Xie… - arXiv preprint arXiv …, 2023 - arxiv.org
Continual pre-training is the paradigm in which pre-trained language models (PLMs)
continually acquire fresh knowledge from growing data and are gradually upgraded. Before …
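A minimal sketch of what such a continual pre-training step might look like in practice, using Hugging Face Transformers: an existing PLM checkpoint is trained further with the same self-supervised masked-LM objective on a newly arrived data shard and saved as the upgraded model. The checkpoint name, file path, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch (not the paper's recipe): continue masked-LM pre-training
# of an existing checkpoint on a new data shard. Names and settings are assumed.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"                      # previously pre-trained PLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# "Growing data": a hypothetical text shard that arrived after the last upgrade.
new_shard = load_dataset("text", data_files={"train": "new_corpus_shard.txt"})["train"]
new_shard = new_shard.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="plm-upgrade-v2",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=new_shard,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
trainer.save_model("plm-upgrade-v2")   # the upgraded PLM replaces the old checkpoint
```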

Pre-trained Models for Representation Learning

Y Lin, N Ding, Z Liu, M Sun - Representation Learning for Natural …, 2023 - library.oapen.org
The pre-training-fine-tuning pipeline has recently become a new paradigm in natural language
processing, yielding better representations of words, sentences, and documents in a self …
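A brief illustration of the second stage of this paradigm, assuming a hypothetical binary sentence-classification task: the self-supervised pre-trained encoder is reused, a new task head is attached, and all weights are updated on labeled data. Checkpoint, label count, and learning rate are assumptions for the sketch.

```python
# Illustrative fine-tuning step in the pre-train-then-fine-tune paradigm.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)    # pre-trained encoder + fresh task head

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(["great movie", "terrible plot"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])             # toy labeled examples

model.train()
loss = model(**batch, labels=labels).loss  # all parameters are updated during fine-tuning
loss.backward()
optimizer.step()
```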

Exploring Universal Intrinsic Task Subspace for Few-shot Learning via Prompt Tuning

Y Qin, X Wang, Y Su, Y Lin, N Ding, J Yi… - … on Audio, Speech …, 2024 - ieeexplore.ieee.org
Why can pre-trained language models (PLMs) learn universal representations and
effectively adapt to a broad range of NLP tasks that differ greatly on the surface? In this work, we empirically …
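To make the prompt-tuning setting concrete, here is a minimal sketch of tuning a soft prompt through a low-dimensional reparameterization, a simplified illustration of the general "intrinsic task subspace" idea rather than the paper's exact method. The frozen backbone, prompt length, and subspace dimension are assumptions.

```python
# Illustrative sketch: a soft prompt generated from a low-dimensional vector z
# is prepended to the input embeddings of a frozen PLM.
import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM, AutoTokenizer

plm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
for p in plm.parameters():                 # the PLM itself stays frozen
    p.requires_grad_(False)

hidden = plm.config.hidden_size            # 768 for bert-base
prompt_len, intrinsic_dim = 20, 16         # hypothetical sizes

# Per-task tuning would optimize only the low-dimensional vector z;
# the projection maps it back into soft-prompt embedding space.
z = nn.Parameter(torch.zeros(intrinsic_dim))
projection = nn.Linear(intrinsic_dim, prompt_len * hidden, bias=False)

def forward_with_prompt(input_ids):
    word_emb = plm.get_input_embeddings()(input_ids)       # (B, L, H)
    prompt = projection(z).view(1, prompt_len, hidden)     # (1, P, H)
    prompt = prompt.expand(word_emb.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, word_emb], dim=1)   # prepend soft prompt
    return plm(inputs_embeds=inputs_embeds)

batch = tokenizer(["the movie was [MASK]."], return_tensors="pt")
out = forward_with_prompt(batch["input_ids"])
print(out.logits.shape)                    # (1, prompt_len + seq_len, vocab_size)
```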