Gemini: a family of highly capable multimodal models. G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, et al. arXiv preprint arXiv:2312.11805, 2023. Cited by 2331.
Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. Y Luan, L He, M Ostendorf, H Hajishirzi. Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2018. Cited by 817.
Entity, relation, and event extraction with contextualized span representations. D Wadden, U Wennberg, Y Luan, H Hajishirzi. Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2019. Cited by 725.
Sparse, dense, and attentional representations for text retrieval. Y Luan, J Eisenstein, K Toutanova, M Collins. Transactions of the Association for Computational Linguistics 9, 329–345, 2021. Cited by 416.
A general framework for information extraction using dynamic span graphs. Y Luan, D Wadden, L He, A Shah, M Ostendorf, H Hajishirzi. Proc. Conf. North American Chapter of the Assoc. for Computational Linguistics (NAACL), 2019. Cited by 403.
Text generation from knowledge graphs with graph transformers. R Koncel-Kedziorski, D Bekal, Y Luan, M Lapata, H Hajishirzi. Proc. Conf. North American Chapter of the Assoc. for Computational Linguistics (NAACL), 2019. Cited by 394.
Large dual encoders are generalizable retrievers. J Ni, C Qu, J Lu, Z Dai, GH Ábrego, J Ma, VY Zhao, Y Luan, KB Hall, et al. arXiv preprint arXiv:2112.07899, 2021. Cited by 360.
Promptagator: few-shot dense retrieval from 8 examples. Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu, A Bakalov, K Guu, KB Hall, et al. arXiv preprint arXiv:2209.11755, 2022. Cited by 203.
Instruction-following evaluation for large language models. J Zhou, T Lu, S Mishra, S Brahma, S Basu, Y Luan, D Zhou, L Hou. arXiv preprint arXiv:2311.07911, 2023. Cited by 159.
ASQA: factoid questions meet long-form answers. I Stelmakh, Y Luan, B Dhingra, MW Chang. arXiv preprint arXiv:2204.06092, 2022. Cited by 130.
Scientific information extraction with semi-supervised neural tagging. Y Luan, M Ostendorf, H Hajishirzi. Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2017. Cited by 110.
Multi-task learning for speaker-role adaptation in neural conversation models. Y Luan, C Brockett, B Dolan, J Gao, M Galley. Proc. Int. Joint Conf. on Natural Language Processing (IJCNLP), 2017. Cited by 96.
LSTM based conversation models. Y Luan, Y Ji, M Ostendorf. Proc. Int. Workshop on Conversational Natural Language Processing (ConvNLP …), 2016. Cited by 69.
CONQRR: conversational query rewriting for retrieval with reinforcement learning. Z Wu, Y Luan, H Rashkin, D Reitter, GS Tomar. arXiv preprint arXiv:2112.08558, 2021. Cited by 64.
PaperRobot: incremental draft generation of scientific ideas. Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan. Proc. Annu. Meeting Assoc. for Computational Linguistics (ACL), 2019. Cited by 63.
Can pre-trained vision and language models answer visual information-seeking questions? Y Chen, H Hu, Y Luan, H Sun, S Changpinyo, A Ritter, MW Chang. arXiv preprint arXiv:2302.11713, 2023. Cited by 61.
Gecko: versatile text embeddings distilled from large language models. J Lee, Z Dai, X Ren, B Chen, D Cer, JR Cole, K Hui, M Boratko, et al. arXiv preprint arXiv:2403.20327, 2024. Cited by 57.
Method for using a multi-scale recurrent neural network with pretraining for spoken language understanding tasks. S Watanabe, Y Luan, B Harsham. US Patent 9,607,616, 2017. Cited by 56.
Open-domain visual entity recognition: towards recognizing millions of Wikipedia entities. H Hu, Y Luan, Y Chen, U Khandelwal, M Joshi, K Lee, K Toutanova, et al. arXiv preprint arXiv:2302.11154, 2023. Cited by 46.
Contextualized representations using textual encyclopedic knowledge. M Joshi, K Lee, Y Luan, K Toutanova. arXiv preprint arXiv:2004.12006, 2020. Cited by 30.