SongYang Gao
Verified email at m.fudan.edu.cn

Title · Cited by · Year
Secrets of RLHF in large language models part I: PPO
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ...
arXiv preprint arXiv:2307.04964, 2023
Cited by 53 · 2023
Secrets of RLHF in large language models part II: Reward modeling
B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, ...
arXiv preprint arXiv:2401.06080, 2024
Cited by 27 · 2024
Self-polish: Enhance reasoning in large language models via problem refinement
Z Xi, S Jin, Y Zhou, R Zheng, S Gao, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2305.14497, 2023
Cited by 20 · 2023
LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment
S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou, Z Xi, X Wang, ...
arXiv preprint arXiv:2312.09979, 2023
Cited by 8 · 2023
TRACE: A comprehensive benchmark for continual learning in large language models
X Wang, Y Zhang, T Chen, S Gao, S Jin, X Yang
arXiv preprint arXiv:2310.06762, 2023
Cited by 7 · 2023
LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment
S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou
arXiv preprint arXiv:2312.09979, 2023
Cited by 6 · 2023
Kernel-whitening: Overcome dataset bias with isotropic sentence embedding
S Gao, S Dou, Q Zhang, X Huang
arXiv preprint arXiv:2210.07547, 2022
Cited by 6 · 2022
Decorrelate irrelevant, purify relevant: Overcome textual spurious correlations from a feature perspective
S Dou, R Zheng, T Wu, S Gao, J Shan, Q Zhang, Y Wu, X Huang
arXiv preprint arXiv:2202.08048, 2022
Cited by 6 · 2022
Chinese Tiny LLM: Pretraining a Chinese-centric large language model
X Du, Z Yu, S Gao, D Pan, Y Cheng, Z Ma, R Yuan, X Qu, J Liu, T Zheng, ...
arXiv preprint arXiv:2404.04167, 2024
Cited by 5 · 2024
Navigating the overkill in large language models
C Shi, X Wang, Q Ge, S Gao, X Yang, T Gui, Q Zhang, X Huang, X Zhao, ...
arXiv preprint arXiv:2401.17633, 2024
Cited by 5 · 2024
Farewell to aimless large-scale pretraining: Influential subset selection for language model
X Wang, W Zhou, Q Zhang, J Zhou, S Gao, J Wang, M Zhang, X Gao, ...
arXiv preprint arXiv:2305.12816, 2023
Cited by 5 · 2023
EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models
W Zhou, X Wang, L Xiong, H Xia, Y Gu, M Chai, F Zhu, C Huang, S Dou, ...
arXiv preprint arXiv:2403.12171, 2024
Cited by 4 · 2024
MAP-Neo: Highly capable and transparent bilingual large language model series
G Zhang, S Qu, J Liu, C Zhang, C Lin, CL Yu, D Pan, E Cheng, J Liu, ...
arXiv preprint arXiv:2405.19327, 2024
Cited by 3 · 2024
ToolEyes: Fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios
J Ye, G Li, S Gao, C Huang, Y Wu, S Li, X Fan, S Dou, Q Zhang, T Gui, ...
arXiv preprint arXiv:2401.00741, 2024
Cited by 3 · 2024
DSRM: Boost textual adversarial training with distribution shift risk minimization
S Gao, S Dou, Y Liu, X Wang, Q Zhang, Z Wei, J Ma, Y Shan
arXiv preprint arXiv:2306.15164, 2023
Cited by 3 · 2023
ToolSword: Unveiling safety issues of large language models in tool learning across three stages
J Ye, S Li, G Li, C Huang, S Gao, Y Wu, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2402.10753, 2024
Cited by 2 · 2024
RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning
J Ye, Y Wu, S Gao, S Li, G Li, X Fan, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2401.08326, 2024
Cited by 2 · 2024
Articles 1–20