Yangyi Chen
Verified email at illinois.edu - Homepage
Title · Cited by · Year
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
F Qi*, Y Chen*, M Li, Z Liu, M Sun
EMNLP, 2021
Cited by: 169
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
F Qi*, M Li*, Y Chen*, Z Zhang, Z Liu, Y Wang, M Sun
ACL, 2021
Cited by: 153
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
F Qi*, Y Chen*, X Zhang, M Li, Z Liu, M Sun
EMNLP, 2021
Cited by: 109
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm
L Xu, Y Chen, G Cui, H Gao, Z Liu
Findings of NAACL, 2022
Cited by: 57
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
X Wang*, Z Wang*, J Liu, Y Chen, L Yuan, H Peng, H Ji
ICLR, 2024
Cited by: 44
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
G Cui*, L Yuan*, B He, Y Chen, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track), 2022
Cited by: 44
A Close Look into the Calibration of Pre-trained Language Models
Y Chen*, L Yuan*, G Cui, Z Liu, H Ji
ACL, 2023
Cited by: 34
Multi-granularity Textual Adversarial Attack with Behavior Cloning
Y Chen*, J Su*, W Wei
EMNLP, 2021
Cited by: 32
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
L Yuan*, Y Zhang*, Y Chen, W Wei
Findings of ACL, 2023
Cited by: 26
R-Tuning: Teaching Large Language Models to Refuse Unknown Questions
H Zhang*, S Diao*, Y Lin*, YR Fung, Q Lian, X Wang, Y Chen, H Ji, ...
NAACL, 2024
Cited by: 25
Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations
L Yuan, Y Chen, G Cui, H Gao, F Zou, X Cheng, H Ji, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track), 2023
Cited by: 25
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP
Y Chen*, H Gao*, G Cui, F Qi, L Huang, Z Liu, M Sun
EMNLP, 2022
Cited by: 24
CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets
L Yuan*, Y Chen*, X Wang, YR Fung, H Peng, H Ji
ICLR, 2024
Cited by: 21
DRESS: Instructing Large Vision-Language Models to Align and Interact with Humans via Natural Language Feedback
Y Chen, K Sikka, M Cogswell, H Ji, A Divakaran
CVPR, 2024
Cited by: 16
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
B Zhu*, Y Qin*, G Cui, Y Chen, W Zhao, C Fu, Y Deng, Z Liu, J Wang, ...
NeurIPS, 2022
Cited by: 13
Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
Y Chen, K Sikka, M Cogswell, H Ji, A Divakaran
NAACL, 2024
Cited by: 11
Executable Code Actions Elicit Better LLM Agents
X Wang, Y Chen, L Yuan, Y Zhang, Y Li, H Peng, H Ji
ICML, 2024
Cited by: 10
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks
Y Chen*, F Qi*, H Gao, Z Liu, M Sun
EMNLP, 2022
Cited by: 10
Automatic Construction of Sememe Knowledge Bases via Dictionaries
F Qi, Y Chen, F Wang, Z Liu, X Chen, M Sun
Findings of ACL, 2021
Cited by: 7
Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge
G Liu, X Wang, L Yuan, Y Chen, H Peng
arXiv preprint arXiv:2311.09731, 2023
Cited by: 4*
Articles 1–20