Large-scale multi-modal pre-trained models: A comprehensive survey

X Wang, G Chen, G Qian, P Gao, XY Wei… - Machine Intelligence …, 2023 - Springer
With the urgent demand for generalized deep models, many large pre-trained models have
been proposed, such as bidirectional encoder representations from transformers (BERT) and the vision transformer (ViT) …

Continual vision-language representation learning with off-diagonal information

Z Ni, L Wei, S Tang, Y Zhuang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Large-scale multi-modal contrastive learning frameworks like CLIP typically require a large
number of image-text samples for training. However, these samples are always collected …

Structure-inducing pre-training

MBA McDermott, B Yap, P Szolovits… - Nature Machine …, 2023 - nature.com
Language model pre-training and the derived general-purpose methods have
reshaped machine learning research. However, there remains considerable uncertainty …

E-CGL: An Efficient Continual Graph Learner

J Guo, Z Ni, Y Zhu, S Tang - arXiv preprint arXiv:2408.09350, 2024 - arxiv.org
Continual learning has emerged as a crucial paradigm for learning from sequential data
while preserving previous knowledge. In the realm of continual graph learning, where …