Cyprien de Masson d'Autume
Reka AI
Verified email at reka.ai
Title
Cited by
Year
Competition-level code generation with AlphaCode
Y Li, D Choi, J Chung, N Kushman, J Schrittwieser, R Leblond, T Eccles, ...
Science 378 (6624), 1092-1097, 2022
Cited by 1267*, 2022
Scaling language models: Methods, analysis & insights from training gopher
JW Rae, S Borgeaud, T Cai, K Millican, J Hoffmann, F Song, J Aslanides, ...
arXiv preprint arXiv:2112.11446, 2021
Cited by 1155, 2021
Episodic memory in lifelong language learning
C de Masson D'Autume, S Ruder, L Kong, D Yogatama
Advances in Neural Information Processing Systems 32, 2019
Cited by 237, 2019
Learning and evaluating general linguistic intelligence
D Yogatama, CM d'Autume, J Connor, T Kocisky, M Chrzanowski, L Kong, ...
arXiv preprint arXiv:1901.11373, 2019
Cited by 210*, 2019
Mind the gap: Assessing temporal generalization in neural language models
A Lazaridou, A Kuncoro, E Gribovskaya, D Agrawal, A Liska, T Terzi, ...
Advances in Neural Information Processing Systems 34, 29348-29363, 2021
Cited by 169*, 2021
A mutual information maximization perspective of language representation learning
L Kong, CM d'Autume, W Ling, L Yu, Z Dai, D Yogatama
arXiv preprint arXiv:1910.08350, 2019
Cited by 162*, 2019
Adaptive semiparametric language models
D Yogatama, C de Masson d’Autume, L Kong
Transactions of the Association for Computational Linguistics 9, 362-373, 2021
Cited by 110, 2021
Psychlab: a psychology laboratory for deep reinforcement learning agents
JZ Leibo, CM d'Autume, D Zoran, D Amos, C Beattie, K Anderson, ...
arXiv preprint arXiv:1801.08116, 2018
Cited by 92*, 2018
Training language GANs from scratch
C de Masson d'Autume, S Mohamed, M Rosca, J Rae
Advances in Neural Information Processing Systems 32, 2019
Cited by 88, 2019
A systematic investigation of commonsense knowledge in large language models
XL Li, A Kuncoro, J Hoffmann, CM d'Autume, P Blunsom, A Nematzadeh
arXiv preprint arXiv:2111.00607, 2021
Cited by 61, 2021
Pitfalls of static language modelling
A Lazaridou, A Kuncoro, E Gribovskaya, D Agrawal, A Liska, T Terzi, ...
arXiv preprint arXiv:2102.01951, 2021
Cited by 55*, 2021
StreamingQA: A benchmark for adaptation to new knowledge over time in question answering models
A Liska, T Kocisky, E Gribovskaya, T Terzi, E Sezener, D Agrawal, ...
International Conference on Machine Learning, 13604-13622, 2022
Cited by 53, 2022
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
JW Rae, S Borgeaud, T Cai, K Millican, J Hoffmann, F Song, J Aslanides, ...
arXiv, 2021
Cited by 32, 2021
Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models
P Padlewski, M Bain, M Henderson, Z Zhu, N Relan, H Pham, D Ong, ...
arXiv preprint arXiv:2405.02287, 2024
Cited by 19, 2024
Scaling language models: Methods, analysis & insights from training gopher. arXiv 2021
JW Rae, S Borgeaud, T Cai, K Millican, J Hoffmann, F Song, J Aslanides, ...
arXiv preprint arXiv:2112.11446, 2021
Cited by 17, 2021
Do language models learn commonsense knowledge?
XL Li, CM d'Autume, A Kuncoro, P Blunsom, A Nematzadeh
arXiv preprint arXiv:2111.00607, 2021
Cited by 5, 2021
A systematic investigation of commonsense understanding in large language models
XL Li, A Kuncoro, CM d’Autume, P Blunsom, A Nematzadeh
CoRR, abs/2111.00607 1, 2021
Cited by 4, 2021
Sentence encoding with tree-constrained relation networks
L Yu, CM d'Autume, C Dyer, P Blunsom, L Kong, W Ling
arXiv preprint arXiv:1811.10475, 2018
Cited by 4, 2018
Reka Core, Flash, and Edge: A series of powerful multimodal language models
R Team, A Ormazabal, C Zheng, CM d'Autume, D Yogatama, D Fu, D Ong, ...
arXiv preprint arXiv:2404.12387, 2024
Cited by 2, 2024
Computer code generation from task descriptions using neural networks
Y Li, DH Choi, J Chung, NA Kushman, J Schrittwieser, R Leblond, ...
US Patent App. 18/105,211, 2023
Cited by 1, 2023