| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Data boost: Text data augmentation through reinforcement learning guided conditional generation | R Liu, G Xu, C Jia, W Ma, L Wang, S Vosoughi | EMNLP 2020 | 100 | 2020 |
| Mitigating Political Bias in Language Models through Reinforced Calibration | R Liu, C Jia, J Wei, G Xu, L Wang, S Vosoughi | Proceedings of the AAAI Conference on Artificial Intelligence 2021 | 78 | 2021 |
| On the safety of conversational models: Taxonomy, dataset, and benchmark | H Sun*, G Xu*, J Deng, J Cheng, C Zheng, H Zhou, N Peng, X Zhu, ... | Findings of ACL 2022 | 55 | 2021 |
| Quantifying and alleviating political bias in language models | R Liu, C Jia, J Wei, G Xu, S Vosoughi | Artificial Intelligence 304, 103654 | 41 | 2022 |
| Can Model Compression Improve NLP Fairness | G Xu, Q Hu | arXiv preprint, https://arxiv.org/pdf/2201.08542.pdf | 25 | 2022 |
| Non-Parallel Text Style Transfer with Self-Parallel Supervision | R Liu, C Gao, C Jia, G Xu, S Vosoughi | ICLR 2022 | 12 | 2022 |
| Enhanced Offensive Language Detection Through Data Augmentation | R Liu, G Xu, S Vosoughi | The International AAAI Conference on Web and Social Media | 10 | 2020 |
| EnDex: Evaluation of Dialogue Engagingness at Scale | G Xu, R Liu, F Harel-Canada, NR Chandra, N Peng | Findings of EMNLP 2022, https://arxiv.org/pdf/2210.12362.pdf | 6 | 2022 |
| Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales | PT Isaza, G Xu, A Oloko, Y Hou, N Peng, D Wang | ACL 2023 | 5 | 2023 |
| NECE: Narrative Event Chain Extraction Toolkit | G Xu, PT Isaza, M Li, A Oloko, B Yao, A Adebiyi, Y Hou, N Peng, D Wang | arXiv preprint arXiv:2208.08063 | 2 | 2022 |
| BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback | G Pandey, Y Nandwani, T Naseem, M Mishra, G Xu, D Raghu, S Joshi, ... | arXiv preprint arXiv:2402.02479 | 1 | 2024 |
| The Joint Training of Transition-Based AMR Parser | G Xu | University of California, Los Angeles | | 2022 |