I want to normalize expertise-driven approaches to AI: that is,
AI design, evaluation, and governance approaches grounded in an understanding of human expertise, including how it is (and should be)
conceptualized, measured, and accounted for with respect to AI capabilities.
Towards this, I use mixed methods to study the design, use, and impacts
of AI tools in complex, real-world domains like social work.
While addressing model- and interface-level design challenges,
I aim to consider the impacts of, and on, organizational structures and incentives, to contextualize our understanding
of what expertise-driven AI might look like in practice, and how to design tools, processes, and policies that help us get there.
I care deeply about extending research to have non-negative real-world impact. My aspiration is to produce knowledge
and approaches that can easily be transferred to support relevant community members, policymakers, technology practitioners, and researchers.
Findings from my recent research have contributed to national and state-level policy efforts surrounding the use of algorithms in the public sector, and have prompted relevant public discourse through news coverage by AP News, NPR, PBS News, and others.
My ongoing research re-formulates measurement in AI as a participatory design problem, and
explores how to build systems and processes that support it. I'd love to talk with anyone interested in this or related topics!
I am a fourth-year PhD student in the Human-Computer Interaction Institute at Carnegie Mellon University's School of Computer Science, where I am fortunate to be co-advised by Ken Holstein (CoALA Lab) and Haiyi Zhu (Social AI Lab). I am an NSF GRFP Fellow, K&L Gates Presidential Fellow, and CASMI PhD Fellow. Before starting my PhD, I was at Wellesley College, a historically women's liberal arts college, where I graduated with a BA in Computer Science (with honors) and was awarded the Trustee Scholarship and Academic Excellence Award in Computer Science.
In prior summers, I was a research intern at Microsoft Research FATE (Fairness, Accountability, Transparency, and Ethics in AI) NYC (2023) and Montréal (2022). During my undergraduate years, I also contributed to Microsoft's Responsible AI efforts as an undergraduate research intern at Microsoft Research Aether (AI Ethics and Effects in Engineering and Research). I also did research and studied abroad at Oxford University.
You can find more information in my CV.
“AI Failure Loops in Feminized Labor: Understanding the Interplay of Workplace AI and Occupational Devaluation”
Anna Kawakami, Jordan Taylor, Sarah Fox, Haiyi Zhu, and Kenneth Holstein.
To appear in AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2024. Non-archival.
“Do Responsible AI Artifacts Help Advance Stakeholder Goals? Perspectives from Regulatory and Civil Society Stakeholders”
Anna Kawakami, Daricia Wilkinson, and Alex Chouldechova.
To appear in AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2024.
“Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use”
Anna Kawakami, Amanda Coston, Hoda Heidari, Kenneth Holstein, and Haiyi Zhu.
In ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing, CSCW 2024.
Also in EAAMO 2023 (non-archival). [preprint]
“The Situate AI Guidebook:
Co-Designing a Toolkit to Support Multi-Stakeholder, Early-stage Deliberations Around Public Sector AI Proposals”
Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, and Kenneth Holstein.
In ACM Conference on Human Factors in Computing Systems, CHI 2024. [paper]
[slides]
[tweet]
“Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge”
Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Kate Glazko, Matthew Lee, Scott Carter, Nikos Arechiga, Haiyi Zhu and Kenneth Holstein.
In ACM Conference on Collective Intelligence, CI 2023. [paper]
“Sensing Wellbeing in the Workplace, Why and For Whom? Envisioning Impacts with Organizational Stakeholders”
Anna Kawakami, Shreya Chowdhary, Shamsi T. Iqbal, Q. Vera Liao, Alexandra Olteanu, Jina Suh, Koustuv Saha.
In ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing, CSCW 2023.
[paper]
[tweet]
“Can Workers Meaningfully Consent to Workplace Wellbeing Technologies?”
Shreya Chowdhary, Anna Kawakami, Mary L. Gray, Jina Suh, Alexandra Olteanu, Koustuv Saha.
In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023.
[paper]
[video]
[tweet]
“A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms”
Amanda Coston, Anna Kawakami, Haiyi Zhu, Kenneth Holstein, Hoda Heidari.
In IEEE Conference on Secure and Trustworthy Machine Learning, IEEE SaTML 2023.
[paper]
[video]
Best Paper Award (Top 1%)
“'Why Do I Care What's Similar?' Probing Challenges in AI-Assisted Child Welfare Decision-Making through Worker-AI Interface Design Concepts”
Anna Kawakami*, Venkat Sivaraman*, Logan Stapleton, Hao-Fei Cheng, Adam Perer, Steven Wu, Haiyi Zhu, Kenneth Holstein.
In ACM Conference on Designing Interactive Systems, DIS 2022.
[paper]
[video]
[preview]
“Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support.”
Anna Kawakami, Venkat Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yang Cheng, Diana Qing, Adam Perer, Steven Wu, Haiyi Zhu, Kenneth Holstein.
In ACM Conference on Human Factors in Computing Systems, CHI 2022.
[paper]
[video]
[preview]
Best Paper Honorable Mention Award (Top 5%)
“How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions.”
Hao-Fei Cheng*, Logan Stapleton*, Anna Kawakami, Venkat Sivaraman, Yang Cheng, Diana Qing, Adam Perer, Steven Wu, Haiyi Zhu, Kenneth Holstein.
In ACM Conference on Human Factors in Computing Systems, CHI 2022.
[paper]
[video]
“The 'Fairness Doctrine' lives on? Theorizing about the Algorithmic News Curation of Google's Top Stories”
Anna Kawakami, Khonzoda Umarova, Jennifer Huang, Eni Mustafaraj.
In ACM Conference on Hypertext and Social Media, HT 2020.
[paper]
“The Media Coverage of the 2020 US Presidential Election Candidates through the Lens of Google’s Top Stories.”
Anna Kawakami, Khonzoda Umarova, Eni Mustafaraj.
In International AAAI Conference on Web and Social Media, ICWSM 2020.
[paper]
“Privacy and Activism in the Transgender Community.”
Ada Lerner, Helen He, Anna Kawakami, Silvia Zeamer, Roberto Hoyle.
In ACM Conference on Human Factors in Computing Systems, CHI 2020.
[paper]
“How Risky are Real Users’ IFTTT Applets?”
Camille Cobb, Milijana Surbatovich, Anna Kawakami, Mahmood Sharif, Lujo Bauer, Limin Jia.
In Sixteenth Symposium on Usable Privacy and Security, SOUPS 2020.
[paper]
“Labor, Visibility, and Technology: Weaving Together Academic Insights and On-Ground Realities” Joy Ming, Lucy Pei, Devansh Saxena, Rama Adithya Varanasi, Anna Kawakami, Nervo Verdezoto, EunJeong Cheon. Workshop at the ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2024. [preprint] [website]
“Community-driven AI: Empowering people through responsible data-driven decision-making.” Ruyuan Wan, Adriana Alvarado Garcia, Devansh Saxena, Catalina Vajiac, Anna Kawakami, Logan Stapleton, Kenneth Holstein, Heloisa Candello, Karla Badillo-Urquiola. Workshop at the ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2023. [paper] [website]
“Who Has an Interest in 'Public Interest Technology'?
Critical Questions for Working with Local Governments & Impacted Communities.”
Logan Stapleton, Devansh Saxena, Anna Kawakami, Tonya Nguyen, Asbjørn Ammitzbøll Flügge, Motahhare Eslami,
Naja Holten Møller, Min Kyung Lee, Shion Guha, Kenneth Holstein, Haiyi Zhu.
Workshop at the ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2022.
[paper]
[website]
“Towards Successful Deployment of Wellbeing Sensing Technologies: Identifying Misalignments across Contextual Boundaries.”
Jina Suh, Javier Hernandez Rivera, Koustuv Saha, Kathy Dixon, Mehrab Bin Morshed, Esther Howe, Anna Kawakami, Mary Czerwinski.
In Workshop on Affective Computing for Mental Wellbeing at the Conference of the Association for the Advancement of Affective Computing, ACII 2023.
[short paper]
“Recentering Validity Considerations through Early-Stage Deliberations Around AI and Policy Design”
Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein.
In Workshop on Designing Policy and Technology Simultaneously at the ACM Conference on Human Factors in Computing Systems, CHI 2023.
[short paper]
“A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms.”
Amanda Coston, Anna Kawakami, Haiyi Zhu, Kenneth Holstein, Hoda Heidari.
In ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO 2022.
[preprint]
“Towards a Learner-Centered Explainable AI.”
Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Anita Sun, Alison Hu, Kate Glazko, Nikos Arechiga,
Matthew Lee, Scott Carter, Haiyi Zhu and Kenneth Holstein.
In Workshop on Human-Centered Explainable AI (HCXAI) at the ACM Conference on
Human Factors in Computing Systems, CHI 2022.
[short paper]
“AI Reliability & Safety: Practices and Challenges at Microsoft.”
Anna Kawakami, Mihaela Vorvoreanu, Ben Zorn, Nathan Liles, Ece Kamar.
Microsoft-internal technical report, 2021.
“The News We See When Searching: Exploring Users’ Mental Models of Google’s Top Stories.”
Anna Kawakami. Senior honors thesis, 2021.
I didn’t follow this exactly but it was a good frame of reference. Also many HCI / I-School and CS PhD programs removed the GRE requirement in the past year. It could make sense to make a list of programs you'd definitely be excited to attend before studying for the GRE (maybe you'll find you don't need to take it, like me!).
Every lab is different but this guide is spot-on for Eni's Wellesley Cred Lab. I like to read through the common scenarios for validation.
I especially like the sections on Ideas and Doing research. He also links an awesome PhD meeting agenda template that I ~aspire~ to follow.
If you're interested in Carnegie Mellon's HCI PhD program (or any other CS-related PhD program at CMU), you could get feedback on your application materials from a wonderful team of PhD student volunteers: Graduate Application Support Program.