Yijia Shao
Stanford University
Verified email at pku.edu.cn - Homepage
Title
Cited by
Year
Continual Pre-training of Language Models
Z Ke, Y Shao, H Lin, T Konishi, G Kim, B Liu
The Eleventh International Conference on Learning Representations (ICLR 2023), 2023
85 · 2023
Continual training of language models for few-shot learning
Z Ke, H Lin, Y Shao, H Xu, L Shu, B Liu
arXiv preprint arXiv:2210.05549, 2022
27 · 2022
Adapting a language model while preserving its general knowledge
Z Ke, Y Shao, H Lin, H Xu, L Shu, B Liu
arXiv preprint arXiv:2301.08986, 2023
14 · 2023
Assisting in writing Wikipedia-like articles from scratch with large language models
Y Shao, Y Jiang, TA Kanell, P Xu, O Khattab, MS Lam
arXiv preprint arXiv:2402.14207, 2024
11 · 2024
LUNA: language understanding with number augmentations on transformers via number plugins and pre-training
H Han, J Xu, M Zhou, Y Shao, S Han, D Zhang
arXiv preprint arXiv:2212.02691, 2022
11 · 2022
Quiet-STaR: Language models can teach themselves to think before speaking
E Zelikman, G Harik, Y Shao, V Jayasiri, N Haber, ND Goodman
arXiv preprint arXiv:2403.09629, 2024
10 · 2024
Class-incremental learning based on label generation
Y Shao, Y Guo, D Zhao, B Liu
arXiv preprint arXiv:2306.12619, 2023
9 · 2023
CMG: A class-mixed generation approach to out-of-distribution detection
M Wang, Y Shao, H Lin, W Hu, B Liu
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2022
7 · 2022
Class incremental learning via likelihood ratio based task prediction
H Lin, Y Shao, W Qian, N Pan, Y Guo, B Liu
arXiv preprint arXiv:2309.15048, 2023
4 · 2023
ACCENT: An automatic event commonsense evaluation metric for open-domain dialogue systems
S Ghazarian, Y Shao, R Han, A Galstyan, N Peng
arXiv preprint arXiv:2305.07797, 2023
4 · 2023
AnaMeta: A table understanding dataset of field metadata knowledge shared by multi-dimensional data analysis tasks
X He, M Zhou, M Zhou, J Xu, X Lv, T Li, Y Shao, S Han, Z Yuan, D Zhang
arXiv preprint arXiv:2209.00946, 2022
3 · 2022
Show, Don't Tell: Aligning Language Models with Demonstrated Feedback
O Shaikh, M Lam, J Hejna, Y Shao, M Bernstein, D Yang
arXiv preprint arXiv:2406.00888, 2024
2 · 2024
FormLM: Recommending Creation Ideas for Online Forms by Modelling Semantic and Structural Information
Y Shao, M Zhou, Y Zhong, T Wu, H Han, S Han, G Huang, D Zhang
arXiv preprint arXiv:2211.05284, 2022
1 · 2022
PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action
Y Shao, T Li, W Shi, Y Liu, D Yang
arXiv preprint arXiv:2409.00138, 2024
2024
Into the Unknown Unknowns: Engaged Human Learning through Participation in Language Model Agent Conversations
Y Jiang, Y Shao, D Ma, SJ Semnani, MS Lam
arXiv preprint arXiv:2408.15232, 2024
2024
Articles 1–15