A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is …
A comprehensive survey of continual learning: theory, method and application
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …
A survey of large language models
Language is essentially a complex, intricate system of human expression governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
Parameter-efficient fine-tuning of large-scale pre-trained language models
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …
Zhongjing: Enhancing the Chinese medical capabilities of large language models through expert feedback and real-world multi-turn dialogue
Recent advances in Large Language Models (LLMs) have achieved remarkable
breakthroughs in understanding and responding to user intents. However, their performance …
Pre-trained language models and their applications
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …
A systematic evaluation of large language models of code
Large language models (LMs) of code have recently shown tremendous promise in
completing code and synthesizing code from natural language descriptions. However, the …
Transformers in time series: A survey
Transformers have achieved superior performance in many tasks in natural language
processing and computer vision, which has also triggered great interest in the time series …
Enhancing chat language models by scaling high-quality instructional conversations
Fine-tuning on instruction data has been widely validated as an effective practice for
implementing chat language models like ChatGPT. Scaling the diversity and quality of such …
Deep class-incremental learning: A survey
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results
in many vision tasks in the closed world. However, novel classes emerge from time to time in …