Diffusion Models are Minimax Optimal Distribution Estimators. K. Oko, S. Akiyama, T. Suzuki. Fortieth International Conference on Machine Learning, 2023. Cited by 51.
Particle Stochastic Dual Coordinate Ascent: Exponential Convergent Algorithm for Mean Field Neural Network Optimization. K. Oko, T. Suzuki, A. Nitanda, D. Wu. International Conference on Learning Representations, 2022. Cited by 12.
Feature Learning via Mean-Field Langevin Dynamics: Classifying Sparse Parities and Beyond. T. Suzuki, D. Wu, K. Oko, A. Nitanda. Advances in Neural Information Processing Systems 36, 2024. Cited by 9.
Symmetric Mean-Field Langevin Dynamics for Distributional Minimax Problems. J. Kim, K. Yamamoto, K. Oko, Z. Yang, T. Suzuki. arXiv preprint arXiv:2312.01127, 2023. Cited by 6.
Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems. A. Nitanda, K. Oko, D. Wu, N. Takenouchi, T. Suzuki. Fortieth International Conference on Machine Learning, 2023. Cited by 5.
Nearly Tight Spectral Sparsification of Directed Hypergraphs. K. Oko, S. Sakaue, S. Tanigawa. 50th International Colloquium on Automata, Languages, and Programming (ICALP …), 2023. Cited by 5*.
MOCHA: Mobile Check-in Application for University Campuses beyond COVID-19. Y. Nishiyama, H. Murakami, R. Suzuki, K. Oko, I. Sukeda, K. Sezaki, et al. Proceedings of the Twenty-Third International Symposium on Theory …, 2022. Cited by 3.
Improved Statistical and Computational Complexity of the Mean-Field Langevin Dynamics under Structured Data. A. Nitanda, K. Oko, T. Suzuki, D. Wu. The Twelfth International Conference on Learning Representations, 2024. Cited by 2.
Learning Sum of Diverse Features: Computational Hardness and Efficient Gradient-Based Training for Ridge Combinations. K. Oko, Y. Song, T. Suzuki, D. Wu. arXiv preprint arXiv:2406.11828, 2024. Cited by 1.
Neural Network Learns Low-Dimensional Polynomials with SGD near the Information-Theoretic Limit. J. D. Lee, K. Oko, T. Suzuki, D. Wu. arXiv preprint arXiv:2406.01581, 2024. Cited by 1.
Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method. K. Oko, S. Akiyama, T. Murata, T. Suzuki. OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022. Cited by 1.
Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning. K. Oko, S. Akiyama, T. Murata, T. Suzuki. arXiv preprint arXiv:2209.00361, 2022. Cited by 1.
Flow Matching Achieves Minimax Optimal Convergence. K. Fukumizu, T. Suzuki, N. Isobe, K. Oko, M. Koyama. arXiv preprint arXiv:2405.20879, 2024.
How Structured Data Guides Feature Learning: A Case Study of Sparse Parity Problem. A. Nitanda, K. Oko, T. Suzuki, D. Wu. Conference on Parsimony and Learning (Recent Spotlight Track), 2023.
How Structured Data Guides Feature Learning: A Case Study of the Parity Problem. A. Nitanda, K. Oko, T. Suzuki, D. Wu. NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning, 2023.
SILVER: Single-Loop Variance Reduction and Application to Federated Learning. K. Oko, S. Akiyama, D. Wu, T. Murata, T. Suzuki. Forty-first International Conference on Machine Learning, 2024.
Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning. K. Yamamoto, K. Oko, Z. Yang, T. Suzuki. Forty-first International Conference on Machine Learning, 2024.