DreamLLM: Synergistic multimodal comprehension and creation

R Dong, C Han, Y Peng, Z Qi, Z Ge, J Yang… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper presents DreamLLM, a learning framework that first achieves versatile
Multimodal Large Language Models (MLLMs) empowered with frequently overlooked …

Macaw-LLM: Multi-modal language modeling with image, audio, video, and text integration

C Lyu, M Wu, L Wang, X Huang, B Liu, Z Du… - arXiv preprint arXiv …, 2023 - arxiv.org
Although instruction-tuned large language models (LLMs) have exhibited remarkable
capabilities across various NLP tasks, their effectiveness on other data modalities beyond …

MiniGPT-5: Interleaved vision-and-language generation via generative vokens

K Zheng, X He, XE Wang - arXiv preprint arXiv:2310.02239, 2023 - arxiv.org
Large Language Models (LLMs) have garnered significant attention for their advancements
in natural language processing, demonstrating unparalleled prowess in text comprehension …

MMDialog: A large-scale multi-turn dialogue dataset towards multi-modal open-domain conversation

J Feng, Q Sun, C Xu, P Zhao, Y Yang, C Tao… - arXiv preprint arXiv …, 2022 - arxiv.org
Responding with multi-modal content has been recognized as an essential capability for an
intelligent conversational agent. In this paper, we introduce the MMDialog dataset to better …

Multimodal federated learning: Concept, methods, applications and future directions

W Huang, D Wang, X Ouyang, J Wan, J Liu, T Li - Information Fusion, 2024 - Elsevier
Multimodal learning mines and analyzes multimodal data in reality to better understand and
appreciate the world around people. However, how to exploit this rich multimodal data …

EasyGen: Easing Multimodal Generation with BiDiffuser and LLMs

X Zhao, B Liu, Q Liu, G Shi, XM Wu - Proceedings of the 62nd …, 2024 - aclanthology.org
We present EasyGen, an efficient model designed to enhance multimodal understanding
and generation by harnessing the capabilities of diffusion models and large language …

PaCE: Unified multi-modal dialogue pre-training with progressive and compositional experts

Y Li, B Hui, ZC Yin, M Yang, F Huang, Y Li - arXiv preprint arXiv …, 2023 - arxiv.org
Perceiving multi-modal information and fulfilling dialogues with humans is a long-term goal
of artificial intelligence. Pre-training is commonly regarded as an effective approach for multi …

DialogCC: An Automated Pipeline for Creating High-Quality Multi-Modal Dialogue Dataset

YJ Lee, B Ko, HG Kim, J Hyeon… - Proceedings of the 2024 …, 2024 - aclanthology.org
As sharing images in instant messaging is a crucial factor, there has been active research
on learning image-text multi-modal dialogue models. However, training a well …

Response generation in multi-modal dialogues with split pre-generation and cross-modal contrasting

L Li, D Zhang, S Zhu, S Li, G Zhou - Information Processing & Management, 2024 - Elsevier
Since dialogues naturally occur in a multi-modal format (text, audio, vision),
textual response generation in dialogues should rely on multi-modal contexts beyond …

KnowPrefix-Tuning: A two-stage prefix-tuning framework for knowledge-grounded dialogue generation

J Bai, Z Yan, Z Yang, J Yang, X Liang, H Guo… - … European Conference on …, 2023 - Springer
Existing knowledge-grounded conversation systems typically generate responses in a
retrieve-then-generate manner. They require a large knowledge base and a strong …