Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations

X Jin, X Ren - arXiv preprint arXiv:2406.14026, 2024 - arxiv.org
Language models (LMs) are known to suffer from forgetting of previously learned examples
when fine-tuned, breaking the stability of deployed LM systems. Despite efforts to mitigate …

Demystifying Language Model Forgetting with Low-Rank Example Associations

X Jin, X Ren - NeurIPS 2024 Workshop on Scalable Continual … - openreview.net
Large language models (LLMs) suffer from forgetting of upstream data when fine-tuned.
Despite efforts to mitigate forgetting, few have investigated whether, and how, forgotten …