| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Language models are few-shot learners | T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ... | Advances in Neural Information Processing Systems 33, 1877-1901 | 27245 | 2020 |
| Hierarchical text-conditional image generation with CLIP latents | A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen | arXiv preprint arXiv:2204.06125 | 4714 | 2022 |
| Zero-shot text-to-image generation | A Ramesh, M Pavlov, G Goh, S Gray, C Voss, A Radford, M Chen, ... | International Conference on Machine Learning, 8821-8831 | 3943 | 2021 |
| Evaluating large language models trained on code | M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ... | arXiv preprint arXiv:2107.03374 | 2409* | 2021 |
| GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models | A Nichol, P Dhariwal, A Ramesh, P Shyam, P Mishkin, B McGrew, ... | arXiv preprint arXiv:2112.10741 | 2370 | 2021 |
| GPT-4 technical report | J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... | arXiv preprint arXiv:2303.08774 | 1567 | 2023 |
| Generative pretraining from pixels | M Chen, A Radford, R Child, J Wu, H Jun, D Luan, I Sutskever | International Conference on Machine Learning, 1691-1703 | 1534 | 2020 |
| Training verifiers to solve math word problems | K Cobbe, V Kosaraju, M Bavarian, M Chen, H Jun, L Kaiser, M Plappert, ... | arXiv preprint arXiv:2110.14168 | 1247 | 2021 |
| Consistency models | Y Song, P Dhariwal, M Chen, I Sutskever | arXiv preprint arXiv:2303.01469 | 385 | 2023 |
| Point·E: A system for generating 3D point clouds from complex prompts | A Nichol, H Jun, P Dhariwal, P Mishkin, M Chen | arXiv preprint arXiv:2212.08751 | 312 | 2022 |
| Scaling laws for autoregressive generative modeling | T Henighan, J Kaplan, M Katz, M Chen, C Hesse, J Jackson, H Jun, ... | arXiv preprint arXiv:2010.14701 | 265 | 2020 |
| Hierarchical text-conditional image generation with CLIP latents (arXiv 2022) | A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen | arXiv preprint arXiv:2204.06125 | 141 | 2022 |
| Efficient training of language models to fill in the middle | M Bavarian, H Jun, N Tezak, J Schulman, C McLeavey, J Tworek, M Chen | arXiv preprint arXiv:2207.14255 | 96 | 2022 |
| DALL·E: Creating images from text | A Ramesh, M Pavlov, G Goh, S Gray, M Chen, R Child, V Misra, P Mishkin, ... | OpenAI Blog, https://openai.com/blog/dall-e | 84 | 2021 |
| Language models are few-shot learners | B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ... | arXiv preprint arXiv:2005.14165 | 64 | 2020 |
| Distribution augmentation for generative modeling | H Jun, R Child, M Chen, J Schulman, A Ramesh, A Radford, I Sutskever | International Conference on Machine Learning, 5006-5019 | 54 | 2020 |
| Evaluating large language models trained on code (arXiv 2021) | M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ... | arXiv preprint arXiv:2107.03374 | 50 | 2021 |
| Using temporal correlations and full distributions to separate intrinsic and extrinsic fluctuations in biological systems | A Hilfinger, M Chen, J Paulsson | Physical Review Letters 109 (24), 248104 | 21 | 2012 |
| Systems and methods for hierarchical text-conditional image generation | A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen | US Patent 11,922,550 | | 2024 |
| Systems and methods for generating natural language using language models trained on computer code | M Chen, J Tworek, I Sutskever, W Zaremba, H Jun, HPO Pinto | US Patent App. 18/321,921 | | 2024 |