Junyi Li
Ph.D. student, Université de Montréal, Renmin University of China
Verified email at umontreal.ca - Homepage
Title · Cited by · Year
A survey of large language models
WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou, Y Min, B Zhang, J Zhang, ...
arXiv preprint arXiv:2303.18223, 2023
Cited by 2009* · 2023
Pre-trained language models for text generation: A survey
J Li, T Tang, WX Zhao, JY Nie, JR Wen
ACM Computing Surveys 56 (9), 1-39, 2024
Cited by 270* · 2024
A survey of vision-language pre-trained models
Y Du, Z Liu, J Li, WX Zhao
arXiv preprint arXiv:2202.10936, 2022
Cited by 161 · 2022
HaluEval: A large-scale hallucination evaluation benchmark for large language models
J Li, X Cheng, WX Zhao, JY Nie, JR Wen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Cited by 160* · 2023
WenLan: Bridging vision and language by large-scale multi-modal pre-training
Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, ...
arXiv preprint arXiv:2103.06561, 2021
Cited by 119 · 2021
Few-shot knowledge graph-to-text generation with pretrained language models
J Li, T Tang, WX Zhao, Z Wei, NJ Yuan, JR Wen
Findings of The 59th Annual Meeting of the Association for Computational …, 2021
Cited by 49 · 2021
Mining implicit entity preference from user-item interaction data for knowledge graph completion via adversarial learning
G He, J Li, WX Zhao, P Liu, JR Wen
Proceedings of the Web Conference 2020, 740-751, 2020
Cited by 41 · 2020
A survey on long text modeling with transformers
Z Dong, T Tang, L Li, WX Zhao
arXiv preprint arXiv:2302.14502, 2023
Cited by 35 · 2023
Generating long and informative reviews with aspect-aware coarse-to-fine decoding
J Li, WX Zhao, JR Wen, Y Song
The 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019
Cited by 35 · 2019
Knowledge-enhanced personalized review generation with capsule graph neural network
J Li, S Li, WX Zhao, G He, Z Wei, NJ Yuan, JR Wen
Proceedings of the 29th ACM International Conference on Information …, 2020
Cited by 34 · 2020
TextBox 2.0: A text generation library with pre-trained language models
T Tang, J Li, Z Chen, Y Hu, Z Yu, W Dai, Z Dong, X Cheng, Y Wang, ...
arXiv preprint arXiv:2212.13005, 2022
Cited by 33* · 2022
Learning to Transfer Prompts for Text Generation
J Li, T Tang, JY Nie, JR Wen, WX Zhao
NAACL 2022, 2022
Cited by 29 · 2022
MVP: Multi-task supervised pre-training for natural language generation
T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2206.12131, 2022
Cited by 27 · 2022
Context-tuning: Learning contextualized prompts for natural language generation
T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2201.08670, 2022
Cited by 22 · 2022
Knowledge-based review generation by coherence enhanced text planning
J Li, WX Zhao, Z Wei, NJ Yuan, JR Wen
The 44th International ACM SIGIR Conference on Research and Development in …, 2021
Cited by 22 · 2021
The dawn after the dark: An empirical study on factuality hallucination in large language models
J Li, J Chen, R Ren, X Cheng, WX Zhao, JY Nie, JR Wen
arXiv preprint arXiv:2401.03205, 2024
Cited by 17 · 2024
ELMER: A non-autoregressive pre-trained language model for efficient and effective text generation
J Li, T Tang, WX Zhao, JY Nie, JR Wen
arXiv preprint arXiv:2210.13304, 2022
Cited by 13 · 2022
BAMBOO: A comprehensive benchmark for evaluating long text modeling capacities of large language models
Z Dong, T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2309.13345, 2023
Cited by 11 · 2023
The Web Can Be Your Oyster for Improving Large Language Models
J Li, T Tang, WX Zhao, J Wang, JY Nie, JR Wen
arXiv preprint arXiv:2305.10998, 2023
Cited by 8* · 2023
Learning to imagine: Visually-augmented natural language generation
T Tang, Y Chen, Y Du, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2305.16944, 2023
Cited by 7 · 2023
Articles 1–20