Zhenghao Lin
Verified email at stu.xmu.edu.cn
Title
Cited by
Year
Annollm: Making large language models to be better crowdsourced annotators
X He, Z Lin, Y Gong, A Jin, H Zhang, C Lin, J Jiao, SM Yiu, N Duan, ...
arXiv preprint arXiv:2303.16854, 2023
124 · 2023
Text generation with diffusion language models: A pre-training approach with continuous paragraph denoise
Z Lin, Y Gong, Y Shen, T Wu, Z Fan, C Lin, N Duan, W Chen
International Conference on Machine Learning, 21051-21064, 2023
57* · 2023
Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis
S Fan, C Lin, H Li, Z Lin, J Su, H Zhang, Y Gong, J Guo, N Duan
EMNLP 2022, 2022
22 · 2022
Prod: Progressive distillation for dense retrieval
Z Lin, Y Gong, X Liu, H Zhang, C Lin, A Dong, J Jiao, J Lu, D Jiang, ...
WWW 2023, 2022
19 · 2022
Rho-1: Not all tokens are what you need
Z Lin, Z Gou, Y Gong, X Liu, Y Shen, R Xu, C Lin, Y Yang, J Jiao, N Duan, ...
arXiv preprint arXiv:2404.07965, 2024
17 · 2024
Competition-level problems are effective llm evaluators
Y Huang, Z Lin, X Liu, Y Gong, S Lu, F Lei, Y Liang, Y Shen, C Lin, ...
Findings of the Association for Computational Linguistics ACL 2024, 13526-13544, 2024
8 · 2024
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
X He, Z Lin, Y Gong, AL Jin, H Zhang, C Lin, J Jiao, SM Yiu, N Duan, ...
5 · 2023
Ensuring Safe and High-Quality Outputs: A Guideline Library Approach for Language Models
Y Luo, Z Lin, Y Zhang, J Sun, C Lin, C Xu, X Su, Y Shen, J Guo, Y Gong
arXiv preprint arXiv:2403.11838, 2024
1 · 2024
Revolutionizing Database Q&A with Large Language Models: Comprehensive Benchmark and Evaluation
Y Zheng, B Li, Z Lin, Y Luo, X Zhou, C Lin, J Su, G Li, S Li
arXiv preprint arXiv:2409.04475, 2024
2024
Articles 1–9