Articles subject to mandatory open access policies - Dongxian Wu
Articles not publicly accessible elsewhere: 2
Not all samples are born equal: Towards effective clean-label backdoor attacks
Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia
Pattern Recognition 139, 109512, 2023
Mandatory open access policy: National Natural Science Foundation of China
DIPDefend: Deep Image Prior Driven Defense Against Adversarial Examples
T Dai, Y Feng, D Wu, B Chen, J Lu, Y Jiang, ST Xia
Proceedings of the 28th ACM International Conference on Multimedia, 1404-1412, 2020
Mandatory open access policy: National Natural Science Foundation of China
Articles publicly accessible elsewhere: 9
Adversarial Weight Perturbation Helps Robust Generalization
D Wu, ST Xia, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Mandatory open access policy: National Natural Science Foundation of China
Adversarial Neuron Pruning Purifies Backdoored Deep Models
D Wu, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2021), 2021
Mandatory open access policy: National Natural Science Foundation of China
Targeted Attack for Deep Hashing based Retrieval
J Bai, B Chen, Y Li, D Wu, W Guo, S Xia, E Yang
European Conference on Computer Vision (ECCV 2020), 2020
Mandatory open access policy: Natural Sciences and Engineering Research Council of Canada, National Natural Science Fo…
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
Y Mo, D Wu, Y Wang, Y Guo, Y Wang
Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Mandatory open access policy: National Natural Science Foundation of China
On the effectiveness of adversarial training against backdoor attacks
Y Gao, D Wu, J Zhang, G Gan, ST Xia, G Niu, M Sugiyama
IEEE Transactions on Neural Networks and Learning Systems, 2023
Mandatory open access policy: National Natural Science Foundation of China, Japan Science and Technology Agency
Towards Robust Model Watermark via Reducing Parametric Vulnerability
G Gan, Y Li, D Wu, ST Xia
International Conference on Computer Vision (ICCV 2023), 2023
Mandatory open access policy: National Natural Science Foundation of China
Matrix Smoothing: A Regularization for DNN with Transition Matrix under Noisy Labels
X Lv, D Wu, ST Xia
2020 IEEE International Conference on Multimedia and Expo (ICME 2020), 1-6, 2020
Mandatory open access policy: National Natural Science Foundation of China
Temporal Calibrated Regularization for Robust Noisy Label Learning
D Wu, Y Wang, Z Zheng, S Xia
International Joint Conference on Neural Networks (IJCNN 2020), 2020
Mandatory open access policy: National Natural Science Foundation of China
Rethinking the Necessity of Labels in Backdoor Removal
Z Xiong, D Wu, Y Wang, Y Wang
ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning, 2023
Mandatory open access policy: National Natural Science Foundation of China
Publication and funding information is determined automatically by a computer program.