Recent advances in natural language inference: A survey of benchmarks, resources, and approaches. S Storks, Q Gao, JY Chai. arXiv preprint arXiv:1904.01172, 2019. Cited by 101.
Language to Action: Towards Interactive Task Learning with Physical Agents. JY Chai, Q Gao, L She, S Yang, S Saba-Sadiya, G Xu. IJCAI 7, 2-9, 2018. Cited by 97.
Commonsense reasoning for natural language understanding: A survey of benchmarks, resources, and approaches. S Storks, Q Gao, JY Chai. arXiv preprint arXiv:1904.01172, 1-60, 2019. Cited by 80.
Embodied BERT: A transformer model for embodied, language-guided visual task completion. A Suglia, Q Gao, J Thomason, G Thattai, G Sukhatme. arXiv preprint arXiv:2108.04927, 2021. Cited by 68.
DialFRED: Dialogue-enabled agents for embodied instruction following. X Gao, Q Gao, R Gong, K Lin, G Thattai, GS Sukhatme. IEEE Robotics and Automation Letters 7 (4), 10049-10056, 2022. Cited by 57.
Grounded semantic role labeling. S Yang, Q Gao, C Liu, C Xiong, SC Zhu, J Chai. Proceedings of the 2016 Conference of the North American Chapter of the …, 2016. Cited by 52.
Physical causality of action verbs in grounded language understanding. Q Gao, M Doering, S Yang, J Chai. Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016. Cited by 51.
What action causes this? Towards naive physical action-effect prediction. Q Gao, S Yang, J Chai, L Vanderwende. Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018. Cited by 34.
Tiered reasoning for intuitive physics: Toward verifiable commonsense language understanding. S Storks, Q Gao, Y Zhang, J Chai. arXiv preprint arXiv:2109.04947, 2021. Cited by 25.
Towards large-scale interpretable knowledge graph reasoning for dialogue systems. YL Tuan, S Beygi, M Fazel-Zarandi, Q Gao, A Cervone, WY Wang. arXiv preprint arXiv:2203.10610, 2022. Cited by 23.
Alexa Arena: A user-centric interactive platform for embodied AI. Q Gao, G Thattai, S Shakiah, X Gao, S Pansare, V Sharma, G Sukhatme, ... Advances in Neural Information Processing Systems 36, 2024. Cited by 19.
Commonsense justification for action explanation. S Yang, Q Gao, S Saba-Sadiya, J Chai. Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018. Cited by 15.
Learning to act with affordance-aware multimodal neural SLAM. Z Jia, K Lin, Y Zhao, Q Gao, G Thattai, GS Sukhatme. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2022. Cited by 14.
Luminous: Indoor scene generation for embodied AI challenges. Y Zhao, K Lin, Z Jia, Q Gao, G Thattai, J Thomason, GS Sukhatme. arXiv preprint arXiv:2111.05527, 2021. Cited by 13.
Are we there yet? Learning to localize in embodied instruction following. S Storks, Q Gao, G Thattai, G Tur. arXiv preprint arXiv:2101.03431, 2021. Cited by 10.
Interactive teaching for conversational AI. Q Ping, F Niu, G Thattai, J Chengottusseriyil, Q Gao, A Reganti, ... arXiv preprint arXiv:2012.00958, 2020. Cited by 9.
Inter-functional analysis of high-throughput phenotype data by non-parametric clustering and its application to photosynthesis. Q Gao, E Ostendorf, JA Cruz, R Jin, DM Kramer, J Chen. Bioinformatics 32 (1), 67-76, 2016. Cited by 9.
LEMMA: Learning language-conditioned multi-robot manipulation. R Gong, X Gao, Q Gao, S Shakiah, G Thattai, GS Sukhatme. IEEE Robotics and Automation Letters, 2023. Cited by 8.
GROUNDHOG: Grounding large language models to holistic segmentation. Y Zhang, Z Ma, X Gao, S Shakiah, Q Gao, J Chai. Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2024. Cited by 7.
Mastering robot manipulation with multimodal prompts through pretraining and multi-task fine-tuning. J Li, Q Gao, M Johnston, X Gao, X He, S Shakiah, H Shi, R Ghanadan, ... arXiv preprint arXiv:2310.09676, 2023. Cited by 5.