Samyak Jain
Title · Cited by · Year
Efficient and effective augmentation strategy for adversarial training
S Addepalli, S Jain
Advances in Neural Information Processing Systems 35, 1488-1501, 2022
Cited by 44 · 2022
Scaling adversarial training to large perturbation bounds
S Addepalli, S Jain, G Sriramanan, R Venkatesh Babu
European Conference on Computer Vision, 301-316, 2022
Cited by 40* · 2022
Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks
S Jain, R Kirk, ES Lubana, RP Dick, H Tanaka, E Grefenstette, ...
arXiv preprint arXiv:2311.12786, 2023
Cited by 26 · 2023
Dart: Diversify-aggregate-repeat training improves generalization of neural networks
S Jain, S Addepalli, PK Sahu, P Dey, RV Babu
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 20 · 2023
Boosting adversarial robustness using feature level stochastic smoothing
S Addepalli, S Jain, G Sriramanan, RV Babu
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 10 · 2021
What Makes and Breaks Safety Fine-tuning? Mechanistic Study
S Jain, ES Lubana, K Oksuz, T Joy, PHS Torr, A Sanyal, PK Dokania
arXiv preprint arXiv:2407.10264, 2024
· 2024
How does fine-tuning affect your model? Mechanistic analysis on procedural tasks
S Jain, R Kirk, ES Lubana, RP Dick, H Tanaka, T Rocktäschel, ...
R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation …
Supplementary: DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
S Jain, S Addepalli, PK Sahu, P Dey, RV Babu
Supplementary Material: Towards Achieving Adversarial Robustness Beyond Perceptual Limits
S Addepalli, S Jain, G Sriramanan, S Khare, RV Babu
Supplementary Material: Scaling Adversarial Training to Large Perturbation Bounds
S Addepalli, S Jain, G Sriramanan, RV Babu
Supplementary Material: Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
S Addepalli, S Jain, G Sriramanan, RV Babu