SquirRL: Automating attack analysis on blockchain incentive mechanisms with deep reinforcement learning C Hou, M Zhou, Y Ji, P Daian, F Tramer, G Fanti, A Juels arXiv preprint arXiv:1912.01798, 2019 | 93 | 2019 |
Efficient algorithms for federated saddle point optimization C Hou, KK Thekumparampil, G Fanti, S Oh arXiv preprint arXiv:2102.06333, 2021 | 22 | 2021 |
FedChain: Chained algorithms for near-optimal communication cost in federated learning C Hou, KK Thekumparampil, G Fanti, S Oh arXiv preprint arXiv:2108.06869, 2021 | 12 | 2021 |
Privately customizing prefinetuning to better match user data in federated learning C Hou, H Zhan, A Shrivastava, S Wang, A Livshits, G Fanti, D Lazar arXiv preprint arXiv:2302.09042, 2023 | 7 | 2023 |
Reducing the communication cost of federated learning through multistage optimization C Hou, KK Thekumparampil, G Fanti, S Oh arXiv preprint arXiv:2108.06869, 2021 | 5 | 2021 |
Multistage stepsize schedule in federated learning: Bridging theory and practice C Hou, K Thekumparampil, S Oh, G Fanti ICML Workshop 12, 2021 | 3 | 2021 |
On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune? S Ke, C Hou, G Fanti, S Oh arXiv preprint arXiv:2402.18905, 2024 | 1 | 2024 |
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity C Hou, KK Thekumparampil, M Shavlovsky, G Fanti, Y Dattatreya, ... arXiv preprint arXiv:2308.00177, 2023 | 1 | 2023 |
FedChain: Chained Algorithms for Near-optimal Communication Cost in Federated Learning C Hou, KK Thekumparampil, G Fanti, S Oh International Conference on Learning Representations, 2021 | | 2021 |
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs C Hou, A Shrivastava, H Zhan, R Conway, T Le, A Sagar, G Fanti, D Lazar ICML, 2024 | | 2024 |