Kazusato Oko
Verified email at g.ecc.u-tokyo.ac.jp

Title
Cited by
Year
Diffusion Models are Minimax Optimal Distribution Estimators
K Oko, S Akiyama, T Suzuki
Fortieth International Conference on Machine Learning, 2023
51 · 2023
Particle stochastic dual coordinate ascent: Exponential convergent algorithm for mean field neural network optimization
K Oko, T Suzuki, A Nitanda, D Wu
International Conference on Learning Representations, 2022
12 · 2022
Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond
T Suzuki, D Wu, K Oko, A Nitanda
Advances in Neural Information Processing Systems 36, 2024
9 · 2024
Symmetric mean-field Langevin dynamics for distributional minimax problems
J Kim, K Yamamoto, K Oko, Z Yang, T Suzuki
arXiv preprint arXiv:2312.01127, 2023
6 · 2023
Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems
A Nitanda, K Oko, D Wu, N Takenouchi, T Suzuki
Fortieth International Conference on Machine Learning, 2023
5 · 2023
Nearly Tight Spectral Sparsification of Directed Hypergraphs
K Oko, S Sakaue, S Tanigawa
50th International Colloquium on Automata, Languages, and Programming (ICALP …, 2023
5* · 2023
MOCHA: mobile check-in application for university campuses beyond COVID-19
Y Nishiyama, H Murakami, R Suzuki, K Oko, I Sukeda, K Sezaki, ...
Proceedings of the Twenty-Third International Symposium on Theory …, 2022
3 · 2022
Improved statistical and computational complexity of the mean-field Langevin dynamics under structured data
A Nitanda, K Oko, T Suzuki, D Wu
The Twelfth International Conference on Learning Representations, 2024
2 · 2024
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations
K Oko, Y Song, T Suzuki, D Wu
arXiv preprint arXiv:2406.11828, 2024
1 · 2024
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
JD Lee, K Oko, T Suzuki, D Wu
arXiv preprint arXiv:2406.01581, 2024
1 · 2024
Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method
K Oko, S Akiyama, T Murata, T Suzuki
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
1 · 2022
Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
K Oko, S Akiyama, T Murata, T Suzuki
arXiv preprint arXiv:2209.00361, 2022
1 · 2022
Flow matching achieves minimax optimal convergence
K Fukumizu, T Suzuki, N Isobe, K Oko, M Koyama
arXiv preprint arXiv:2405.20879, 2024
2024
How Structured Data Guides Feature Learning: A Case Study of Sparse Parity Problem
A Nitanda, K Oko, T Suzuki, D Wu
Conference on Parsimony and Learning (Recent Spotlight Track), 2023
2023
How Structured Data Guides Feature Learning: A Case Study of the Parity Problem
A Nitanda, K Oko, T Suzuki, D Wu
NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning, 2023
2023
SILVER: Single-loop variance reduction and application to federated learning
K Oko, S Akiyama, D Wu, T Murata, T Suzuki
Forty-first International Conference on Machine Learning, 2024
Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning
K Yamamoto, K Oko, Z Yang, T Suzuki
Forty-first International Conference on Machine Learning, 2024
Articles 1–17