| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| GPT-4 technical report | J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... | arXiv preprint arXiv:2303.08774 | 2185 | 2023 |
| Training compute-optimal large language models | J Hoffmann, S Borgeaud, A Mensch, E Buchatskaya, T Cai, E Rutherford, ... | arXiv preprint arXiv:2203.15556 | 1143 | 2022 |
| Gemini: a family of highly capable multimodal models | G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ... | arXiv preprint arXiv:2312.11805 | 924 | 2023 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | JW Rae, S Borgeaud, T Cai, K Millican, J Hoffmann, F Song, J Aslanides, ... |  | 919 | 2021 |
| A clinically applicable approach to continuous prediction of future acute kidney injury | N Tomašev, X Glorot, JW Rae, M Zielinski, H Askham, A Saraiva, ... | Nature 572 (7767), 116-119 | 862 | 2019 |
| Improving language models by retrieving from trillions of tokens | S Borgeaud, A Mensch, J Hoffmann, T Cai, E Rutherford, K Millican, ... | International Conference on Machine Learning, 2206-2240 | 764 | 2022 |
| Compressive transformers for long-range sequence modelling | JW Rae, A Potapenko, SM Jayakumar, TP Lillicrap | arXiv preprint arXiv:1911.05507 | 494 | 2019 |
| Stabilizing transformers for reinforcement learning | E Parisotto, F Song, J Rae, R Pascanu, C Gulcehre, S Jayakumar, ... | International Conference on Machine Learning, 7487-7498 | 356 | 2020 |
| Model-free episodic control | C Blundell, B Uria, A Pritzel, Y Li, A Ruderman, JZ Leibo, J Rae, ... | arXiv preprint arXiv:1606.04460 | 295 | 2016 |
| Relational recurrent neural networks | A Santoro, R Faulkner, D Raposo, J Rae, M Chrzanowski, T Weber, ... | Advances in Neural Information Processing Systems 31 | 264 | 2018 |
| Neural arithmetic logic units | A Trask, F Hill, SE Reed, J Rae, C Dyer, P Blunsom | Advances in Neural Information Processing Systems 31 | 242 | 2018 |
| Unsupervised predictive memory in a goal-directed agent | G Wayne, CC Hung, D Amos, M Mirza, A Ahuja, A Grabska-Barwinska, ... | arXiv preprint arXiv:1803.10760 | 196 | 2018 |
| Scaling memory-augmented neural networks with sparse reads and writes | J Rae, JJ Hunt, I Danihelka, T Harley, AW Senior, G Wayne, A Graves, ... | Advances in Neural Information Processing Systems 29 | 182 | 2016 |
| Reducing sentiment bias in language models via counterfactual evaluation | PS Huang, H Zhang, R Jiang, R Stanforth, J Welbl, J Rae, V Maini, ... | arXiv preprint arXiv:1911.03064 | 177 | 2019 |
| Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ... | arXiv preprint arXiv:2403.05530 | 156 | 2024 |
| Multiplicative interactions and where to find them | SM Jayakumar, WM Czarnecki, J Menick, J Schwarz, J Rae, S Osindero, ... | International Conference on Learning Representations | 127 | 2020 |
| V-MPO: On-policy maximum a posteriori policy optimization for discrete and continuous control | HF Song, A Abdolmaleki, JT Springenberg, A Clark, H Soyer, JW Rae, ... | arXiv preprint arXiv:1909.12238 | 111 | 2019 |
| Memory-based parameter adaptation | P Sprechmann, SM Jayakumar, JW Rae, A Pritzel, AP Badia, B Uria, ... | International Conference on Learning Representations | 107 | 2018 |
| An empirical analysis of compute-optimal large language model training | J Hoffmann, S Borgeaud, A Mensch, E Buchatskaya, T Cai, E Rutherford, ... | Advances in Neural Information Processing Systems 35, 30016-30030 | 104 | 2022 |
| Top-KAST: Top-K always sparse training | S Jayakumar, R Pascanu, J Rae, S Osindero, E Elsen | Advances in Neural Information Processing Systems 33, 20744-20754 | 90 | 2020 |