Kevin Eykholt
Verified email at ibm.com
Title | Cited by | Year
URET: Universal Robustness Evaluation Toolkit (for Evasion)
K Eykholt, T Lee, D Schales, J Jang, I Molloy
32nd USENIX Security Symposium (USENIX Security 23), 3817-3833, 2023
Cited by 4 | 2023
A Study of the Effects of Transfer Learning on Adversarial Robustness
P Vaishnavi, K Eykholt, A Rahmati
Transactions on Machine Learning Research
Accelerating certified robustness training via knowledge transfer
P Vaishnavi, K Eykholt, A Rahmati
Advances in Neural Information Processing Systems 35, 5269-5281, 2022
Cited by 4 | 2022
Adaptive robustness certification against adversarial examples
K Eykholt, T Lee, J Jang, S Wang, IM Molloy
US Patent App. 17/113,927, 2022
Cited by 1 | 2022
Adaptive verifiable training using pairwise class similarity
S Wang, K Eykholt, T Lee, J Jang, I Molloy
Proceedings of the AAAI Conference on Artificial Intelligence 35 (11), 10201 …, 2021
Cited by 3 | 2021
Ares: A system-oriented wargame framework for adversarial ML
F Ahmed, P Vaishnavi, K Eykholt, A Rahmati
2022 IEEE Security and Privacy Workshops (SPW), 73-79, 2022
Cited by 5 | 2022
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
N Baracaldo, F Ahmed, K Eykholt, Y Zhou, S Priya, T Lee, S Kadhe, M Tan, ...
2023 IEEE Security and Privacy Workshops (SPW), 45-56, 2023
2023
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
NB Angel, F Ahmed, K Eykholt, Y Zhou, S Priya, T Lee, SR Kadhe, M Tan, ...
IEEE Symposium on Security and Privacy, 2023
2023
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model
N Baracaldo, K Eykholt, F Ahmed, Y Zhou, S Priya, T Lee, S Kadhe, Y Tan, ...
Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022, 2022
Cited by 1 | 2022
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model
NB Angel, K Eykholt, F Ahmed, Y Zhou, S Priya, T Lee, SR Kadhe, M Tan, ...
Annual Conference on Neural Information Processing Systems, 2022
Cited by 1 | 2022
Can attention masks improve adversarial robustness?
P Vaishnavi, T Cong, K Eykholt, A Prakash, A Rahmati
International Workshop on Engineering Dependable and Secure Machine Learning …, 2020
Cited by 10 | 2020
Constraining neural networks for robustness through alternative encoding
K Eykholt, T Lee, IM Molloy, J Jang
US Patent 11,847,555, 2023
Cited by 2 | 2023
Designing adversarially resilient classifiers using resilient feature engineering
K Eykholt, A Prakash
arXiv preprint arXiv:1812.06626, 2018
Cited by 3 | 2018
Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms
K Eykholt
Cited by 2 | 2019
DeTA: Minimizing Data Leaks in Federated Learning via Decentralized and Trustworthy Aggregation
PC Cheng, K Eykholt, Z Gu, H Jamjoom, KR Jayaram, E Valdez, A Verma
Proceedings of the Nineteenth European Conference on Computer Systems, 219-235, 2024
Cited by 1 | 2024
EdgeTorrent: Real-time Temporal Graph Representations for Intrusion Detection
IJ King, X Shu, J Jang, K Eykholt, T Lee, HH Huang
Proceedings of the 26th International Symposium on Research in Attacks …, 2023
Cited by 2 | 2023
Ensuring Authorized Updates in Multi-user Database-Backed Applications
K Eykholt, A Prakash, B Mozafari
26th USENIX Security Symposium (USENIX Security 17), 1445-1462, 2017
Cited by 5 | 2017
Federated learning with partitioned and dynamically-shuffled model updates
Z Gu, JK Radhakrishnan, A Verma, E Valdez, PC Cheng, HT Jamjoom, ...
US Patent App. 17/323,099, 2022
2022
Graph exploration framework for adversarial example generation
T Lee, K Eykholt, DL Schales, J Jang, IM Molloy
US Patent App. 17/536,059, 2023
Cited by 1 | 2023
Graph neural network (GNN) training using meta-path neighbor sampling and contrastive learning
D She, X Shu, K Eykholt, J Jang
US Patent App. 17/480,012, 2023
Cited by 1 | 2023
Articles 1–20