Gemini: a family of highly capable multimodal models. G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, et al. arXiv preprint arXiv:2312.11805, 2023. Cited by 1493.
PaLM 2 technical report. R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, et al. arXiv preprint arXiv:2305.10403, 2023. Cited by 1219.
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, et al. arXiv preprint arXiv:2206.04615, 2022. Cited by 1000.
BLiMP: The benchmark of linguistic minimal pairs for English. A Warstadt, A Parrish, H Liu, A Mohananey, W Peng, SF Wang, et al. Transactions of the Association for Computational Linguistics 8, 377-392, 2020. Cited by 393.
BBQ: A hand-built bias benchmark for question answering. A Parrish, A Chen, N Nangia, V Padmakumar, J Phang, J Thompson, et al. arXiv preprint arXiv:2110.08193, 2021. Cited by 235.
Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. A Warstadt. arXiv preprint arXiv:1909.02597, 2019. Cited by 130.
DataPerf: Benchmarks for data-centric AI development. M Mazumder, C Banbury, X Yao, B Karlaš, W Gaviria Rojas, S Diamos, et al. Advances in Neural Information Processing Systems 36, 2024. Cited by 110.
QuALITY: Question answering with long input texts, yes! RY Pang, A Parrish, N Joshi, N Nangia, J Phang, A Chen, V Padmakumar, et al. arXiv preprint arXiv:2112.08608, 2021. Cited by 98.
Inverse scaling: When bigger isn't better. IR McKenzie, A Lyzhov, M Pieler, A Parrish, A Mueller, A Prabhu, et al. arXiv preprint arXiv:2306.09479, 2023. Cited by 56.
Gemma 2: Improving open language models at a practical size. G Team, M Riviere, S Pathak, PG Sessa, C Hardin, S Bhupatiraju, et al. arXiv preprint arXiv:2408.00118, 2024. Cited by 40.
Does putting a linguist in the loop improve NLU data collection? A Parrish, W Huang, O Agha, SH Lee, N Nangia, A Warstadt, K Aggarwal, et al. arXiv preprint arXiv:2104.07179, 2021. Cited by 39.
What do NLP researchers believe? Results of the NLP community metasurvey. J Michael, A Holtzman, A Parrish, A Mueller, A Wang, A Chen, D Madaan, et al. arXiv preprint arXiv:2208.12852, 2022. Cited by 29.
DICES dataset: Diversity in conversational AI evaluation for safety. L Aroyo, A Taylor, M Diaz, C Homan, A Parrish, G Serapio-García, et al. Advances in Neural Information Processing Systems 36, 2024. Cited by 26.
NOPE: A corpus of naturally-occurring presuppositions in English. A Parrish, S Schuster, A Warstadt, O Agha, SH Lee, Z Zhao, SR Bowman, et al. arXiv preprint arXiv:2109.06987, 2021. Cited by 22.
Two failures of self-consistency in the multi-step reasoning of LLMs. A Chen, J Phang, A Parrish, V Padmakumar, C Zhao, SR Bowman, K Cho. arXiv preprint arXiv:2305.14279, 2023. Cited by 17.
Single-turn debate does not help humans answer hard reading-comprehension questions. A Parrish, H Trivedi, E Perez, A Chen, N Nangia, J Phang, SR Bowman. arXiv preprint arXiv:2204.05212, 2022. Cited by 14.
Introducing v0.5 of the AI safety benchmark from MLCommons. B Vidgen, A Agrawal, AM Ahmed, V Akinwande, N Al-Nuaimi, N Alfaraj, et al. arXiv preprint arXiv:2404.12241, 2024. Cited by 13.
Conceptual combination in the LATL with and without syntactic composition. A Parrish, L Pylkkänen. Neurobiology of Language 3 (1), 46-66, 2022. Cited by 13.
Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation. J Quaye, A Parrish, O Inel, C Rastogi, HR Kirk, M Kahng, E Van Liemt, et al. The 2024 ACM Conference on Fairness, Accountability, and Transparency, 388-406, 2024. Cited by 12.
Two-Turn Debate Doesn't Help Humans Answer Hard Reading Comprehension Questions. A Parrish, H Trivedi, N Nangia, V Padmakumar, J Phang, AS Saimbhi, et al. arXiv preprint arXiv:2210.10860, 2022. Cited by 10.