Angelina Wang
Verified email at stanford.edu - Homepage
Title · Cited by · Year
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets
A Wang, A Narayanan, O Russakovsky
European Conference on Computer Vision, 733-751, 2020
Cited by 194 · 2020
Learning Robotic Manipulation through Visual Planning and Acting
A Wang, T Kurutach, K Liu, P Abbeel, A Tamar
Robotics: Science and Systems (RSS), 2019
Cited by 158 · 2019
Understanding and Evaluating Racial Biases in Image Captioning
D Zhao, A Wang, O Russakovsky
International Conference on Computer Vision (ICCV), 2021
Cited by 135 · 2021
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation
A Wang, VV Ramaswamy, O Russakovsky
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Cited by 99 · 2022
Directional Bias Amplification
A Wang, O Russakovsky
International Conference on Machine Learning (ICML), 2021
Cited by 70 · 2021
Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
A Wang, S Kapoor, S Barocas, A Narayanan
ACM Conference on Fairness, Accountability, and Transparency, 2023
Cited by 66 · 2023
Measuring Representational Harms in Image Captioning
A Wang, S Barocas, K Laird, H Wallach
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Cited by 51 · 2022
Safer Classification By Synthesis
W Wang, A Wang, A Tamar, X Chen, P Abbeel
NeurIPS 2017 Workshop on Aligned Artificial Intelligence, 2017
Cited by 46 · 2017
Taxonomizing and Measuring Representational Harms: A Look at Image Tagging
J Katzman, A Wang, M Scheuerman, SL Blodgett, K Laird, H Wallach, ...
Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI), 2023
Cited by 37 · 2023
Overwriting Pretrained Bias with Finetuning Data
A Wang, O Russakovsky
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 36* · 2023
The Limits of Global Inclusion in AI Development
A Chan, CT Okolo, Z Terner, A Wang
AAAI 2021 Workshop on Reframing Diversity in AI, 2021
Cited by 31 · 2021
Measuring Implicit Bias in Explicitly Unbiased Large Language Models
X Bai, A Wang, I Sucholutsky, TL Griffiths
arXiv preprint arXiv:2402.04105, 2024
Cited by 30 · 2024
Manipulative tactics are the norm in political emails: Evidence from 100K emails from the 2020 US election cycle
A Mathur, A Wang, C Schwemmer, M Hamin, BM Stewart, A Narayanan
Big Data & Society, 2023
Cited by 30 · 2023
Gender artifacts in visual datasets
N Meister, D Zhao, A Wang, VV Ramaswamy, R Fong, O Russakovsky
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 28 · 2023
Large language models cannot replace human participants because they cannot portray identity groups
A Wang, J Morgenstern, JP Dickerson
arXiv preprint arXiv:2402.01908, 2024
Cited by 26 · 2024
Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways
A Wang, X Bai, S Barocas, SL Blodgett
arXiv preprint arXiv:2402.04420, 2024
Cited by 4* · 2024
Strategies for Increasing Corporate Responsible AI Prioritization
A Wang, T Datta, JP Dickerson
arXiv preprint arXiv:2405.03855, 2024
Cited by 2 · 2024
Visions of a Discipline: Analyzing Introductory AI Courses on YouTube
S Engelmann, MZ Choksi, A Wang, C Fiesler
The 2024 ACM Conference on Fairness, Accountability, and Transparency, 2400-2420, 2024
Cited by 1 · 2024
Evaluating Generative AI Systems is a Social Science Measurement Challenge
H Wallach, M Desai, N Pangakis, AF Cooper, A Wang, S Barocas, ...
arXiv preprint arXiv:2411.10939, 2024
2024
Benchmark suites instead of leaderboards for evaluating AI fairness
A Wang, A Hertzmann, O Russakovsky
Patterns 5 (11), 2024
2024