| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Mitigating manipulation in peer review via randomized reviewer assignments | S Jecmen, H Zhang, R Liu, N Shah, V Conitzer, F Fang | Advances in Neural Information Processing Systems 33, 12533-12545, 2020 | 70 | 2020 |
| ReviewerGPT? An exploratory study on using large language models for paper reviewing | R Liu, NB Shah | arXiv preprint arXiv:2306.00622, 2023 | 29 | 2023 |
| LLMs as workers in human-computational algorithms? Replicating crowdsourcing pipelines with LLMs | T Wu, H Zhu, M Albayrak, A Axon, A Bertsch, W Deng, Z Ding, B Guo, ... | arXiv preprint arXiv:2307.10168, 2023 | 18 | 2023 |
| Near-optimal reviewer splitting in two-phase paper reviewing and conference experiment design | S Jecmen, H Zhang, R Liu, F Fang, V Conitzer, NB Shah | Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10 …, 2022 | 11 | 2022 |
| Cite-seeing and reviewing: A study on citation bias in peer review | I Stelmakh, C Rastogi, R Liu, S Chawla, F Echenique, NB Shah | PLOS ONE 18 (7), e0283980, 2023 | 9 | 2023 |
| Improving interpersonal communication by simulating audiences with language models | R Liu, H Yen, R Marjieh, TL Griffiths, R Krishna | arXiv preprint arXiv:2311.00687, 2023 | 6 | 2023 |
| API-assisted code generation for question answering on varied table structures | Y Cao, S Chen, R Liu, Z Wang, D Fried | arXiv preprint arXiv:2310.14687, 2023 | 6 | 2023 |
| How do Large Language Models Navigate Conflicts between Honesty and Helpfulness? | R Liu, TR Sumers, I Dasgupta, TL Griffiths | arXiv preprint arXiv:2402.07282, 2024 | 5 | 2024 |
| Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial | R Liu, S Jecmen, V Conitzer, F Fang, NB Shah | arXiv preprint arXiv:2307.05443, 2023 | 2 | 2023 |
| Large Language Models Assume People are More Rational than We Really are | R Liu, J Geng, JC Peterson, I Sucholutsky, TL Griffiths | arXiv preprint arXiv:2406.17055, 2024 | | 2024 |