mPLUG-Owl: Modularization empowers large language models with multimodality. Q Ye, H Xu, G Xu, J Ye, M Yan, Y Zhou, J Wang, A Hu, P Shi, Y Shi, C Li, et al. arXiv preprint arXiv:2304.14178, 2023. (Cited by 541)
mPLUG-Owl2: Revolutionizing multi-modal large language model with modality collaboration. Q Ye, H Xu, J Ye, M Yan, A Hu, H Liu, Q Qian, J Zhang, F Huang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024. (Cited by 133)
WenLan: Bridging vision and language by large-scale multi-modal pre-training. Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, et al. arXiv preprint arXiv:2103.06561, 2021. (Cited by 125)
mPLUG-DocOwl: Modularized multimodal large language model for document understanding. J Ye, A Hu, H Xu, Q Ye, M Yan, Y Dan, C Zhao, G Xu, C Li, J Tian, Q Qi, et al. arXiv preprint arXiv:2307.02499, 2023. (Cited by 65)
UReader: Universal OCR-free visually-situated language understanding with multimodal large language model. J Ye, A Hu, H Xu, Q Ye, M Yan, G Xu, C Li, J Tian, Q Qian, J Zhang, Q Jin, et al. arXiv preprint arXiv:2310.05126, 2023. (Cited by 56)
Leveraging multi-token entities in document-level named entity recognition. A Hu, Z Dou, JY Nie, JR Wen. Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), 7961-7968, 2020. (Cited by 29)
mPLUG-DocOwl 1.5: Unified structure learning for OCR-free document understanding. A Hu, H Xu, J Ye, M Yan, L Zhang, B Zhang, C Li, J Zhang, Q Jin, F Huang, et al. arXiv preprint arXiv:2403.12895, 2024. (Cited by 23)
A roadmap for big model. S Yuan, H Zhao, S Zhao, J Leng, Y Liang, X Wang, J Yu, X Lv, Z Shao, et al. arXiv preprint arXiv:2203.14101, 2022. (Cited by 20)
ICECAP: Information concentrated entity-aware image captioning. A Hu, S Chen, Q Jin. Proceedings of the 28th ACM International Conference on Multimedia, 4217-4225, 2020. (Cited by 19)
mPLUG-PaperOwl: Scientific diagram analysis with the multimodal large language model. A Hu, Y Shi, H Xu, J Ye, Q Ye, M Yan, C Li, Q Qian, J Zhang, F Huang. ACM Multimedia, 2024. (Cited by 18)
Question-controlled text-aware image captioning. A Hu, S Chen, Q Jin. Proceedings of the 29th ACM International Conference on Multimedia, 3097-3105, 2021. (Cited by 15)
Youku-mPLUG: A 10 million large-scale Chinese video-language dataset for pre-training and benchmarks. H Xu, Q Ye, X Wu, M Yan, Y Miao, J Ye, G Xu, A Hu, Y Shi, G Xu, C Li, et al. arXiv preprint arXiv:2306.04362, 2023. (Cited by 12)
InfoMetIC: An informative metric for reference-free image caption evaluation. A Hu, S Chen, L Zhang, Q Jin. arXiv preprint arXiv:2305.06002, 2023. (Cited by 8)
Movie101: A new movie understanding benchmark. Z Yue, Q Zhang, A Hu, L Zhang, Z Wang, Q Jin. arXiv preprint arXiv:2305.12140, 2023. (Cited by 7)
TinyChart: Efficient chart understanding with visual token merging and program-of-thoughts learning. L Zhang, A Hu, H Xu, M Yan, Y Xu, Q Jin, J Zhang, F Huang. arXiv preprint arXiv:2404.16635, 2024. (Cited by 6)
Multimodal pretraining from monolingual to multilingual. L Zhang, L Ruan, A Hu, Q Jin. Machine Intelligence Research 20 (2), 220-232, 2023. (Cited by 6)
Accommodating audio modality in CLIP for multimodal processing. L Ruan, A Hu, Y Song, L Zhang, S Zheng, Q Jin. Proceedings of the AAAI Conference on Artificial Intelligence 37 (8), 9641-9649, 2023. (Cited by 5)
MPMQA: Multimodal question answering on product manuals. L Zhang, A Hu, J Zhang, S Hu, Q Jin. Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 13958 …, 2023. (Cited by 4)
Generalizing multimodal pre-training into multilingual via language acquisition. L Zhang, A Hu, Q Jin. arXiv preprint arXiv:2206.11091, 2022. (Cited by 4)
Multi-lingual acquisition on multimodal pre-training for cross-modal retrieval. L Zhang, A Hu, Q Jin. Advances in Neural Information Processing Systems 35, 29691-29704, 2022. (Cited by 3)