Ethan Perez
Anthropic; New York University
Verified email at anthropic.com
Title · Cited by · Year
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
P Lewis, E Perez, A Piktus, F Petroni, V Karpukhin, N Goyal, H Küttler, ...
NeurIPS 2020, 2020
Cited by 4866 · 2020
FiLM: Visual Reasoning with a General Conditioning Layer
E Perez, F Strub, H De Vries, V Dumoulin, A Courville
AAAI 2018, 2018
Cited by 2268 · 2018
Constitutional AI: Harmlessness from AI Feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 1146 · 2022
ELI5: Long Form Question Answering
A Fan, Y Jernite*, E Perez*, D Grangier, J Weston, M Auli
Association for Computational Linguistics (ACL) 2019, 2019
Cited by 532 · 2019
Red teaming language models with language models
E Perez, S Huang, F Song, T Cai, R Ring, J Aslanides, A Glaese, ...
EMNLP 2022, 2022
Cited by 525 · 2022
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 420 · 2022
True Few-Shot Learning with Language Models
E Perez, D Kiela, K Cho
NeurIPS 2021, 2021
Cited by 412 · 2021
Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting
M Turpin, J Michael, E Perez, S Bowman
Advances in Neural Information Processing Systems 36, 2024
Cited by 282 · 2024
Supervised multimodal bitransformers for classifying images and text
D Kiela, S Bhooshan, H Firooz, E Perez, D Testuggine
arXiv preprint arXiv:1909.02950, 2019
Cited by 274 · 2019
Discovering language model behaviors with model-written evaluations
E Perez, S Ringer, K Lukošiūtė, K Nguyen, E Chen, S Heiner, C Pettit, ...
arXiv preprint arXiv:2212.09251, 2022
Cited by 220 · 2022
Feature-wise transformations
V Dumoulin, E Perez, N Schucher, F Strub, H Vries, A Courville, Y Bengio
Distill 3 (7), e11, 2018
Cited by 211* · 2018
Pretraining language models with human preferences
T Korbak, K Shi, A Chen, R Bhalerao, CL Buckley, J Phang, SR Bowman, ...
ICML 2023, 2023
Cited by 180 · 2023
Unsupervised Question Decomposition for Question Answering
E Perez, P Lewis, W Yih, K Cho, D Kiela
EMNLP 2020, 2020
Cited by 170 · 2020
Language models (mostly) know what they know
S Kadavath, T Conerly, A Askell, T Henighan, D Drain, E Perez, ...
arXiv preprint arXiv:2207.05221, 2022
Cited by 144 · 2022
HoME: a Household Multimodal Environment
S Brodeur, E Perez*, A Anand*, F Golemo*, L Celotti, F Strub, J Rouat, ...
ICLR 2018 Workshop, 2017
Cited by 135 · 2017
Towards understanding sycophancy in language models
M Sharma, M Tong, T Korbak, D Duvenaud, A Askell, SR Bowman, ...
ICLR, 2023
Cited by 132 · 2023
The capacity for moral self-correction in large language models
D Ganguli, A Askell, N Schiefer, TI Liao, K Lukošiūtė, A Chen, A Goldie, ...
arXiv preprint arXiv:2302.07459, 2023
Cited by 132 · 2023
Studying large language model generalization with influence functions
R Grosse, J Bae, C Anil, N Elhage, A Tamkin, A Tajdini, B Steiner, D Li, ...
arXiv preprint arXiv:2308.03296, 2023
Cited by 115 · 2023
Training language models with language feedback at scale
J Scheurer, JA Campos, T Korbak, JS Chan, A Chen, K Cho, E Perez
arXiv preprint arXiv:2303.16755, 2023
Cited by 89 · 2023
Measuring progress on scalable oversight for large language models
SR Bowman, J Hyun, E Perez, E Chen, C Pettit, S Heiner, K Lukošiūtė, ...
arXiv preprint arXiv:2211.03540, 2022
Cited by 87 · 2022