Vinodkumar Prabhakaran
Title · Cited by · Year
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Cited by 4607 · 2023
LaMDA: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
Cited by 1429 · 2022
Language from police body camera footage shows racial disparities in officer respect
R Voigt, NP Camp, V Prabhakaran, WL Hamilton, RC Hetey, CM Griffiths, ...
Proceedings of the National Academy of Sciences 114 (25), 6521-6526, 2017
Cited by 498 · 2017
Social Biases in NLP Models as Barriers for Persons with Disabilities
B Hutchinson, V Prabhakaran, E Denton, K Webster, Y Zhong, S Denuyl
ACL 2020, 2020
Cited by 320 · 2020
Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations
AM Davani, M Díaz, V Prabhakaran
Transactions of the Association for Computational Linguistics 10, 92-110, 2022
Cited by 292 · 2022
Computational Argumentation Quality Assessment in Natural Language
H Wachsmuth, N Naderi, Y Hou, Y Bilu, V Prabhakaran, TA Thijm, G Hirst, ...
Cited by 251 · 2017
Power to the People? Opportunities and Challenges for Participatory AI
A Birhane, W Isaac, V Prabhakaran, M Díaz, MC Elish, I Gabriel, ...
The second Conference on Equity and Access in Algorithms, Mechanisms, and …, 2022
Cited by 191 · 2022
Re-imagining algorithmic fairness in India and beyond
N Sambasivan, E Arnesen, B Hutchinson, T Doshi, V Prabhakaran
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and …, 2021
Cited by 188 · 2021
On Releasing Annotator-Level Labels and Information in Datasets
V Prabhakaran, AM Davani, M Díaz
The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing …, 2021
Cited by 137 · 2021
Perturbation Sensitivity Analysis to Detect Unintended Model Biases
V Prabhakaran, B Hutchinson, M Mitchell
EMNLP, 2019
Cited by 132 · 2019
PaLM: Scaling language modeling with pathways. arXiv 2022
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
arXiv preprint arXiv:2204.02311 10, 2022
Cited by 110 · 2022
Committed belief annotation and tagging
MT Diab, L Levin, T Mitamura, O Rambow, V Prabhakaran, W Guo
Proceedings of the Third Linguistic Annotation Workshop (LAW), 68-73, 2009
Cited by 110 · 2009
RtGender: A corpus for studying differential responses to gender
R Voigt, D Jurgens, V Prabhakaran, D Jurafsky, Y Tsvetkov
Proceedings of the Eleventh International Conference on Language Resources …, 2018
Cited by 86 · 2018
Predicting the Rise and Fall of Scientific Topics from Trends in their Rhetorical Framing
V Prabhakaran, WL Hamilton, D McFarland, D Jurafsky
Cited by 86 · 2016
CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation
M Díaz, ID Kivlichan, R Rosen, DK Baker, R Amironesei, V Prabhakaran, ...
ACM Conference on Fairness Accountability and Transparency, 2022
Cited by 79 · 2022
Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics
D Martin Jr, V Prabhakaran, J Kuhlberg, A Smart, WS Isaac
ICLR Workshop on Machine Learning in Real Life, 2020
Cited by 79 · 2020
Bias and fairness in natural language processing
KW Chang, V Prabhakaran, V Ordonez
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 78 · 2019
Automatic committed belief tagging
V Prabhakaran, O Rambow, M Diab
23rd International Conference on Computational Linguistics (COLING): Posters …, 2010
Cited by 76 · 2010
Predicting Overt Display of Power in Written Dialogs
V Prabhakaran, O Rambow, M Diab
Human Language Technologies: The 2012 Annual Conference of the NAACL (North …, 2012
Cited by 60 · 2012
Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation
E Denton, M Díaz, I Kivlichan, V Prabhakaran, R Rosen
NeurIPS Workshop on Data-centric AI (DCAI), 2021
Cited by 55 · 2021