Charlie Hou
Verified email at andrew.cmu.edu - Homepage
Title
Cited by
Year
SquirRL: Automating attack analysis on blockchain incentive mechanisms with deep reinforcement learning
C Hou, M Zhou, Y Ji, P Daian, F Tramer, G Fanti, A Juels
arXiv preprint arXiv:1912.01798, 2019
Cited by 93 · 2019
Efficient algorithms for federated saddle point optimization
C Hou, KK Thekumparampil, G Fanti, S Oh
arXiv preprint arXiv:2102.06333, 2021
Cited by 22 · 2021
FedChain: Chained algorithms for near-optimal communication cost in federated learning
C Hou, KK Thekumparampil, G Fanti, S Oh
arXiv preprint arXiv:2108.06869, 2021
Cited by 12 · 2021
Privately customizing prefinetuning to better match user data in federated learning
C Hou, H Zhan, A Shrivastava, S Wang, A Livshits, G Fanti, D Lazar
arXiv preprint arXiv:2302.09042, 2023
Cited by 7 · 2023
Reducing the communication cost of federated learning through multistage optimization
C Hou, KK Thekumparampil, G Fanti, S Oh
arXiv preprint arXiv:2108.06869, 2021
Cited by 5 · 2021
Multistage stepsize schedule in federated learning: Bridging theory and practice
C Hou, KK Thekumparampil, G Fanti, S Oh
ICML Workshop 12, 2021
Cited by 3 · 2021
On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?
S Ke, C Hou, G Fanti, S Oh
arXiv preprint arXiv:2402.18905, 2024
Cited by 1 · 2024
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity
C Hou, KK Thekumparampil, M Shavlovsky, G Fanti, Y Dattatreya, ...
arXiv preprint arXiv:2308.00177, 2023
Cited by 1 · 2023
FedChain: Chained Algorithms for Near-optimal Communication Cost in Federated Learning
C Hou, KK Thekumparampil, G Fanti, S Oh
International Conference on Learning Representations, 2021
2021
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
C Hou, A Shrivastava, H Zhan, R Conway, T Le, A Sagar, G Fanti, D Lazar
ICML 2024