| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Large language models can be guided to evade AI-generated text detection | N Lu, S Liu, R He, Q Wang, YS Ong, K Tang | arXiv preprint arXiv:2305.10847, 2023 | 24 | 2023 |
| Multi-domain active learning: Literature review and comparative study | R He, S Liu, S He, K Tang | IEEE Transactions on Emerging Topics in Computational Intelligence 7 (3 …), 2022 | 19* | 2022 |
| Dataset condensation for recommendation | J Wu, W Fan, S Liu, Q Liu, R He, Q Li, K Tang | arXiv preprint arXiv:2310.01038, 2023 | 4 | 2023 |
| Perturbation-based two-stage multi-domain active learning | R He, Z Dai, S He, K Tang | Proceedings of the 32nd ACM International Conference on Information and …, 2023 | 2 | 2023 |
| Multi-Domain Learning From Insufficient Annotations | R He, S Liu, J Wu, S He, K Tang | European Conference on Artificial Intelligence 372, 1028–1035, 2023 | 2 | 2023 |