I'm an HCI researcher and Ph.D. candidate at the Berkeley School of Information, advised by Professor Niloufar Salehi.
My research sits at the intersection of HCI, responsible AI, public interest technology, and social computing. My PhD focuses on how we can better align sociotechnical systems with human values (e.g., developing better measurement frameworks, operationalizing values, and bridging thick description with technical implementation). Methodologically, I combine mixed methods, controlled experiments, design research, and systems building in my work.
My research career began at UC Berkeley, where I designed and completed an interdisciplinary major in Computer Science, Interaction Design, and Critical Theory.
Since then, I have collaborated with Jenn Wortman Vaughan and Jean Garcia-Gathright at Microsoft Research (FATE/Sociotechnical Alignment Center); Michael Bernstein and Mark Whiting at Stanford HCI; and Björn Hartmann at the Berkeley Institute of Design, on projects spanning the future of work, human–AI collaboration, and AI evaluation.
Outside of work, I think a lot about theories of change, relationality, and the poetics/politics of diaspora. I enjoy yoga, snowboarding, logic puzzles, wandering through museums, and being outside all day whenever I can get away with it.
Tonya Nguyen, Jean Garcia-Gathright, Hanna Washington, Alex Chouldechova, Hanna Wallach, Jennifer Wortman Vaughan
▾ abstract
Introduces a scalable participatory methodology for validating generative AI evaluation criteria with diverse users, demonstrating how structured stakeholder engagement improves measurement rigor and surfaces gaps in automated quality assessments.
In submission
Tonya Nguyen, Serene Cheon, Liza Gak, Cathy Hu, Catherine Albiston, Niloufar Salehi
▾ abstract
Examines how explainable AI and algorithmic framings intersect with people's values around redistributing public resources. We conduct an interview study with parents who undergo algorithmic school assignment, and an experimental study testing how different explanations shape people's perceptions of the algorithm.
In submission
▾ abstract
Social movement organizations, such as mutual aid groups, rely on technology to increase their influence, meet immediate needs, and address systemic inequalities. In this paper, we examine the role of technology in moments of crisis and the tensions mutual aid groups face when relying on tools designed with values that may be antithetical to their own. Through a qualitative study with mutual aid volunteers in the United States, we found that mutual aid groups prioritize values such as solidarity, security, and co-production as they navigate technology adoption. However, while technology can streamline logistics and enhance visibility for mutual aid groups, adopting existing technologies and conventions of practice can erode opportunities for building solidarity, present challenges for accountability, and exacerbate pre-existing social exclusions. We argue that these tensions emerge not simply from a mismatch between values and technical design, but as systematic outcomes of adopting tools that embed different political assumptions and points of access. Our findings contribute to understanding how values shape, and are shaped by, technological infrastructure in mutual aid work.
▾ abstract
Understanding how people perceive algorithmic decision-making remains critical as these systems are increasingly integrated into areas such as education, healthcare, and criminal justice. These perceptions can shape trust in, compliance with, and the perceived legitimacy of automated systems. Focusing on San Francisco's decade-long policy of algorithmic school assignment, we draw on procedural and distributive justice theory to investigate parents' fairness perceptions of the city's school assignment system. We find that key differences in parents' definitions of fairness shape their preferences for what constitutes a fair school assignment system, even after controlling for parents' own assignment outcomes. Moreover, parents' definitions and perceptions of fairness differ across socioeconomic and racial groups. For instance, among white respondents, the most common definition of fairness was "proximity" to their assigned school, whereas among Hispanic or Latino parents, the most common definition was that the "same rules" are applied to everyone. It is crucial for computational system designers and policymakers to consider these differences when deciding which goals and values are embedded in decision-making systems and whom those goals and values reflect.
▾ abstract
Emerging methods for participatory algorithm design have proposed collecting and aggregating individual stakeholders' preferences to create algorithmic systems that account for those stakeholders' values. Drawing on two years of research across two public school districts in the United States, we study how families and school districts use students' preferences for schools to meet their goals in the context of algorithmic student assignment systems. We find that the design of the preference language, i.e., the structure in which participants must express their needs and goals to the decision-maker, shapes the opportunities for meaningful participation. We define three properties of preference languages (expressiveness, cost, and collectivism) and discuss how these properties shape who is able to participate and how effectively they can communicate their needs to the decision-maker. Reflecting on these findings, we offer implications and paths forward for researchers and practitioners who are considering applying a preference-based model for participation in algorithmic decision-making.
▾ abstract
Presenters often collect audience feedback through practice talks to refine their presentations. In formative interviews, we find that although text feedback and verbal discussions allow presenters to receive feedback, organizing that feedback into actionable presentation revisions remains challenging. Feedback may lack context, be redundant, and be spread across various emails, notes, and conversations. To collate and contextualize both text and verbal feedback, we present SlideSpecs. SlideSpecs lets audience members provide text feedback (e.g., 'font too small') while attaching an automatically detected context, including relevant slides (e.g., 'Slide 7') or content tags (e.g., 'slide design'). SlideSpecs also records and transcribes spoken group discussions that commonly occur after practice talks and facilitates linking text critiques to relevant discussion segments. Finally, presenters can use SlideSpecs to review all text and spoken feedback in a single contextually rich interface (e.g., relevant slides, topics, and follow-up discussions). We demonstrate the effectiveness of SlideSpecs by deploying it in eight practice talks with a range of topics and purposes and reporting our findings.
▾ abstract
Public school districts across the United States have implemented school choice systems that have the potential to improve underserved students' access to educational opportunities. However, research has shown that learning about and applying for schools can be extremely time-consuming and expensive, making it difficult for these systems to create more equitable access to resources in practice. A common factor surfaced in prior work is unequal access to information about the schools and enrollment process. In response, governments and non-profits have invested in providing more information about schools to parents, for instance, through detailed online dashboards. However, we know little about what information is actually useful to historically marginalized and underserved families. We conducted interviews with 10 low-income families and families of color to learn about the challenges they faced navigating an online school choice and enrollment system. We complement this data with four interviews with people who have supported families through the enrollment process in a wide range of roles, from school principal to non-profit staff ("parent advocates"). Our findings highlight the value of personalized support and trusting relationships in delivering relevant and helpful information. We contrast this against online information resources and dashboards, which tend to be impersonal, target a broad audience, and make strong assumptions about what parents should look for in a school without sensitivity to families' varying circumstances. We advocate for an assets-based design approach to information support in public school enrollment, one that asks how to strengthen the local, one-on-one assistance that community members already provide.
▾ abstract
Across the United States, a growing number of school districts are turning to matching algorithms to assign students to public schools. The designers of these algorithms aimed to promote values such as transparency, equity, and community in the process. However, school districts have encountered practical challenges in their deployment. In fact, San Francisco Unified School District voted to stop using and completely redesign its student assignment algorithm because it frustrated families and did not promote educational equity in practice. We analyze this system using a Value Sensitive Design approach and find that one reason values are not met in practice is that the system relies on modeling assumptions about families' priorities, constraints, and goals that clash with the real world. These assumptions overlook the complex barriers to ideal participation that many families face, particularly because of socioeconomic inequalities. We argue that direct, ongoing engagement with stakeholders is central to aligning algorithmic values with real-world conditions. In doing so, we must broaden how we evaluate algorithms while recognizing the limitations of purely algorithmic solutions to complex socio-political problems.
▾ abstract
This paper presents Timelines, a design activity to assist values advocates: people who help others recognize values and ethical concerns as relevant to technical practice. Rather than integrate seamlessly into existing design processes, Timelines aims to create a space for critical reflection and contestation among expert participants (such as technology researchers, practitioners, or students) and a values advocate facilitator to surface the importance and relevance of values and ethical concerns. The activity's design is motivated by theoretical perspectives from design fiction, scenario planning, and value sensitive design. The activity helps participants surface discussion of broad societal-level changes related to a technology by creating stories from news headlines, and recognize a diversity of experiences situated in the everyday by creating social media posts from different viewpoints. We reflect on how decisions about the activity's design and facilitation enable it to assist in values advocacy practices.
▾ abstract
Conversational artificial intelligence (AI) has become increasingly ubiquitous in public and private life. Although conversational AI may be useful and seamless, using voice to mediate human-AI interaction raises potential privacy risks and socio-cultural implications for gender bias and childhood development. To better understand the downstream implications of AI, we pose a set of questions to frame how conversational AI and various stakeholders may interact within sociotechnical systems.
▾ abstract
A team's early interactions are influential: small behaviors cascade, driving the team either toward successful collaboration or toward fracture. Would a team be more viable if it could undo initial interactional missteps and try again? We introduce a technique that supports online and remote teams in creating multiple parallel worlds: the same team meets many times, but pseudonym masking leads members to believe that each convening is with a new team, while actual membership remains static. Afterward, the team moves forward with the parallel world with the highest viability by using the same pseudonyms and conversation history from that instance. In two experiments, we find that this technique improves team viability: teams that are reconvened from the highest-viability parallel world are significantly more viable than the same group meeting in a new parallel world. Our work suggests parallel worlds can help teams start off on the right foot, and stay there.
▾ abstract
Listening and speaking are tied to human experiences of closeness and trust. As voice interfaces gain mainstream popularity, we ask: is our relationship with technology that speaks with us fundamentally different from technology we use to read and type? In particular, will we disclose more about ourselves to computers that speak to us and listen to our answers? We examine this question through a controlled experiment where a conversational agent asked participants closeness-generating questions common in social psychology through either text-based, voice-based, or mixed interfaces. We found that people skipped fewer invasive questions when listening and speaking compared to other conditions; the effect was slightly larger when the computer had a male voice; and people skipped more frequently as questions became more invasive. This research has implications for the future design of conversational agents and introduces important new factors in concerns for user privacy.
My non-research projects.
Dérivé is a dynamic mode of discovery that lights the path to an adventure. My roles in this project included prototyping, wireframing, Arduino programming, and mobile app product design.
Calm.io is an interactive bracelet that guides grounding practices for relaxation and bodily awareness during stressful times. My roles in this project included literature review, programming, industrial design, and prototyping.
Heuristic Algorithm is an exploration of hidden mechanisms through the visual metaphor of a black box. My roles in this project included concept generation, programming, and prototyping.