Yada Pruksachatkun
Verified email at nyu.edu
Title · Cited by · Year
SuperGLUE: A stickier benchmark for general-purpose language understanding systems
A Wang, Y Pruksachatkun, N Nangia, A Singh, J Michael, F Hill, O Levy, ...
Advances in neural information processing systems 32, 2019
Cited by 1877 · 2019
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by 1095 · 2022
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 266 · 2020
BOLD: Dataset and metrics for measuring biases in open-ended language generation
J Dhamala, T Sun, V Kumar, S Krishna, Y Pruksachatkun, KW Chang, ...
Proceedings of the 2021 ACM conference on fairness, accountability, and …, 2021
Cited by 185 · 2021
English intermediate-task training improves zero-shot cross-lingual transfer too
J Phang, I Calixto, PM Htut, Y Pruksachatkun, H Liu, C Vania, K Kann, ...
arXiv preprint arXiv:2005.13013, 2020
Cited by 67 · 2020
On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations
YT Cao, Y Pruksachatkun, KW Chang, R Gupta, V Kumar, J Dhamala, ...
arXiv preprint arXiv:2203.13928, 2022
Cited by 62 · 2022
Moments of change: Analyzing peer-based cognitive support in online mental health forums
Y Pruksachatkun, SR Pendse, A Sharma
Proceedings of the 2019 CHI conference on human factors in computing systems …, 2019
Cited by 56 · 2019
jiant 1.2: A software toolkit for research on general-purpose text understanding models
A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ...
Note: http://jiant.info/, 2019
Cited by 51 · 2019
jiant: A software toolkit for research on general-purpose text understanding models
Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ...
arXiv preprint arXiv:2003.02249, 2020
Cited by 38 · 2020
Does robustness improve fairness? Approaching fairness with word substitution robustness methods for text classification
Y Pruksachatkun, S Krishna, J Dhamala, R Gupta, KW Chang
arXiv preprint arXiv:2106.10826, 2021
Cited by 29 · 2021
Mitigating gender bias in distilled language models via counterfactual role reversal
U Gupta, J Dhamala, V Kumar, A Verma, Y Pruksachatkun, S Krishna, ...
arXiv preprint arXiv:2203.12574, 2022
Cited by 28 · 2022
Manipulation of search engine results during the 2016 US congressional elections
PT Metaxas, Y Pruksachatkun
Proceedings of the ICIW 6, 2017
Cited by 25 · 2017
CLIP: a dataset for extracting action items for physicians from hospital discharge notes
J Mullenbach, Y Pruksachatkun, S Adler, J Seale, J Swartz, TG McKelvey, ...
arXiv preprint arXiv:2106.02524, 2021
Cited by 13 · 2021
Measuring fairness of text classifiers via prediction sensitivity
S Krishna, R Gupta, A Verma, J Dhamala, Y Pruksachatkun, KW Chang
arXiv preprint arXiv:2203.08670, 2022
Cited by 7 · 2022
Practicing trustworthy machine learning
Y Pruksachatkun, M Mcateer, S Majumdar
" O'Reilly Media, Inc.", 2023
Cited by 2 · 2023
BLOOM: A 176B-parameter open-access multilingual language model
BS Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by 2 · 2022
Proceedings of the First Workshop on Trustworthy Natural Language Processing
Y Pruksachatkun, A Ramakrishna, KW Chang, S Krishna, J Dhamala, ...
Proceedings of the First Workshop on Trustworthy Natural Language Processing, 2021
Cited by 1 · 2021
Extracting clinical follow-ups from discharge summaries
Y Pruksachatkun, S Adler, TG McKelvey, JL Swartz, H Dai, Y Yang, ...
US Patent 11,861,314, 2024
2024
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
A Ovalle, KW Chang, N Mehrabi, Y Pruksachatkun, A Galstyan, ...
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing …, 2023
2023
Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction
J White, A Raghuvanshi, Y Pruksachatkun
arXiv preprint arXiv:2306.03959, 2023
2023