Adversarial Weight Perturbation Helps Robust Generalization. D Wu, ST Xia, Y Wang. Conference on Neural Information Processing Systems (NeurIPS 2020). Cited by 689.
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. D Wu, Y Wang, ST Xia, J Bailey, X Ma. International Conference on Learning Representations (ICLR 2020). Cited by 338.
Adversarial Neuron Pruning Purifies Backdoored Deep Models. D Wu, Y Wang. Conference on Neural Information Processing Systems (NeurIPS 2021). Cited by 217.
Targeted Attack for Deep Hashing based Retrieval. J Bai, B Chen, Y Li, D Wu, W Guo, S Xia, E Yang. European Conference on Computer Vision (ECCV 2020). Cited by 87.
When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture. Y Mo, D Wu, Y Wang, Y Guo, Y Wang. Conference on Neural Information Processing Systems (NeurIPS 2022). Cited by 41.
Not all samples are born equal: Towards effective clean-label backdoor attacks. Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia. Pattern Recognition 139, 109512, 2023. Cited by 32.
DIPDefend: Deep Image Prior Driven Defense against Adversarial Examples. T Dai, Y Feng, D Wu, B Chen, J Lu, Y Jiang, ST Xia. Proceedings of the 28th ACM International Conference on Multimedia, 1404–1412, 2020. Cited by 23.
On the effectiveness of adversarial training against backdoor attacks. Y Gao, D Wu, J Zhang, G Gan, ST Xia, G Niu, M Sugiyama. IEEE Transactions on Neural Networks and Learning Systems, 2023. Cited by 14.
Towards Robust Model Watermark via Reducing Parametric Vulnerability. G Gan, Y Li, D Wu, ST Xia. International Conference on Computer Vision (ICCV 2023). Cited by 7.
Backdoor attack on hash-based image retrieval via clean-label data poisoning. K Gao, J Bai, B Chen, D Wu, ST Xia. arXiv preprint arXiv:2109.08868, 2021. Cited by 6.
Universal adversarial head: Practical protection against video data leakage. J Bai, B Chen, D Wu, C Zhang, ST Xia. ICML 2021 Workshop on Adversarial Machine Learning. Cited by 5.
Matrix Smoothing: A Regularization for DNN with Transition Matrix under Noisy Labels. X Lv, D Wu, ST Xia. IEEE International Conference on Multimedia and Expo (ICME 2020), 1–6. Cited by 3.
Temporal Calibrated Regularization for Robust Noisy Label Learning. D Wu, Y Wang, Z Zheng, S Xia. International Joint Conference on Neural Networks (IJCNN 2020). Cited by 2.
Rethinking the Necessity of Labels in Backdoor Removal. Z Xiong, D Wu, Y Wang, Y Wang. ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning. Cited by 1.
Does Adversarial Robustness Really Imply Backdoor Vulnerability? Y Gao, D Wu, J Zhang, ST Xia, G Niu, M Sugiyama. 2021. Cited by 1.
Towards Reliable Backdoor Attacks on Vision Transformers. Y Mo, D Wu, Y Wang, Y Guo, Y Wang.
Do We Really Need Labels for Backdoor Defense? Z Xiong, D Wu, Y Wang, Y Wang.