Hu Xiangfei
Intern at Shanghai AI Lab
Verified email at pjlab.org.cn
Title
Cited by
Year
Llama-adapter: Efficient fine-tuning of language models with zero-init attention
R Zhang, J Han, C Liu, P Gao, A Zhou, X Hu, S Yan, P Lu, H Li, Y Qiao
arXiv preprint arXiv:2303.16199, 2023
Cited by 428 · 2023
Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners
R Zhang, X Hu, B Li, S Huang, H Deng, Y Qiao, P Gao, H Li
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 103 · 2023
Articles 1–2