Publications
A selection of my publications. Full list on Google Scholar.
The effects of example-based explanations in a machine learning interface
Carrie J Cai, Jonas Jongejan, Jess Holbrook
IUI '19: Proceedings of the 24th International Conference on Intelligent User Interfaces (2019)
The black-box nature of machine learning algorithms can make their predictions difficult to understand and explain to end-users. In this paper, we propose and evaluate two kinds of example-based explanations in the visual domain, normative explanations and comparative explanations, which automatically surface examples from the training set of a deep neural net sketch-recognition algorithm. To investigate their effects, we deployed these explanations to 1,150 users on QuickDraw, an online platform where users draw images and see whether a recognizer has correctly guessed the intended drawing. When the algorithm failed to recognize the drawing, those who received normative explanations felt they had a better understanding of the system, and perceived the system to have higher capability. However, comparative explanations did not always improve perceptions of the algorithm, possibly because they sometimes exposed limitations of the algorithm and may have led to surprise. These findings suggest that examples can serve as a vehicle for explaining algorithmic behavior, but point to relative advantages and disadvantages of using different kinds of examples, depending on the goal.
Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality
Bertram F Malle, Jess Holbrook
Journal of Personality and Social Psychology (2012)
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences, from intentionality and desire to belief to personality, that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research.
A systematic review and thematic analysis of community-collaborative approaches to computing research
Ned Cooper, Tiffanie Horne, Gillian R Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, Lauren Wilcox
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (2022)
HCI researchers have been gradually shifting attention from individual users to communities when engaging in research, design, and system development. However, our field has yet to establish a cohesive, systematic understanding of the challenges, benefits, and commitments of community-collaborative approaches to research. We conducted a systematic review and thematic analysis of 47 computing research papers discussing participatory research with communities for the development of technological artifacts and systems, published over the last two decades. From this review, we identified seven themes associated with the evolution of a project: from establishing community partnerships to sustaining results. Our findings suggest that several tensions characterize these projects, many of which relate to the power and position of researchers, and the computing research environment, relative to community partners. We discuss the implications of our findings and offer methodological proposals to guide HCI, and computing research more broadly, towards practices that center communities.
Unmet needs and opportunities for mobile translation AI
Daniel J Liebling, Michal Lahav, Abigail Evans, Aaron Donsbach, Jess Holbrook, Boris Smus, Lindsey Boran
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020)
Translation apps and devices are often presented in the context of providing assistance while traveling abroad. However, the spectrum of needs for cross-language communication is much wider. To investigate these needs, we conducted three studies with populations spanning socioeconomic status and geographic regions: (1) United States-based travelers, (2) migrant workers in India, and (3) immigrant populations in the United States. We compare frequent travelers' perception and actual translation needs with those of the two migrant communities. The latter two, with low language proficiency, have the greatest translation needs to navigate their daily lives. However, current mobile translation apps do not meet these needs. Our findings provide new insights on the usage practices and limitations of mobile translation tools. Finally, we propose design implications to help apps better serve these unmet needs.
Human-centered responsible artificial intelligence: Current & future trends
Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (2023)
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultimately aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI. In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends to advance this important area of research by fostering collaboration and sharing ideas.
Identifying the intersections: User experience + research scientist collaboration in a generative machine learning interface
Claire Kayacik, Sherol Chen, Signe Nørly, Jess Holbrook, Adam Roberts, Douglas Eck
CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (2019)
Creative generative machine learning interfaces are stronger when multiple actors bearing different points of view actively contribute to them. Involving user experience (UX) research and design in the creation of machine learning (ML) models helps ML research scientists more effectively identify the human needs those models will fulfill. The People and AI Research (PAIR) group within Google developed a novel program in which UXers are embedded in an ML research group for three months to provide a human-centered perspective on the creation of ML models. The first full-time cohort of UXers was embedded in a team of ML research scientists focused on deep generative models for music composition. Here, we discuss the structure and goals of the program, challenges we faced during execution, and insights gained from the process. We offer practical suggestions for fostering communication between UX and ML research teams, and recommend UX design processes for building creative generative machine learning interfaces.
The design space of pre-trained models
Meredith Ringel Morris, Carrie J Cai, Jess Holbrook, Chinmay Kulkarni, Michael Terry
HCAI@NeurIPS 2022, Human Centered AI (2023)
Card et al.'s classic paper "The Design Space of Input Devices" established the value of design spaces as a tool for HCI analysis and invention. We posit that developing design spaces for emerging pre-trained, generative AI models is necessary for supporting their integration into human-centered systems and practices. We explore what it means to develop an AI model design space by proposing two design spaces relating to generative AI models: the first considers how HCI can impact generative models (i.e., interfaces for models) and the second considers how generative models can impact HCI (i.e., models as an HCI prototyping material).
What does AI mean for smallholder farmers? A proposal for farmer-centered AI research
Courtney Heldreth, Diana Akrong, Jess Holbrook, Norman Makoto Su
Interactions (2021)
AI offers opportunities to solve complex problems facing smallholder farmers in the Global South. However, there is a dearth of research and resources available to organizations and policymakers for building farmer-centered AI systems. We propose concrete future directions for building AI solutions and tools that are meaningful to farmers and will significantly improve their lives. We also discuss tensions that may arise when incorporating AI into farming ecosystems.
Designing (with) AI for wellbeing
Dimitra Dritsa, Loes Van Renswouw, Sara Colombo, Kaisa Väänänen, Sander Bogers, Arian Martinez, Jess Holbrook, Aarnout Brombacher
Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (2024)
Designing with data and Artificial Intelligence (AI) can bring significant value to the development of systems and technologies that promote personal wellbeing. However, designing (with) AI for wellbeing also carries unaddressed challenges and risks, such as the difficulty of ensuring that generated feedback or proposed interventions remain relevant given the large interpersonal variation in current, desired, and achievable levels of physical and mental wellbeing. In this one-day hybrid workshop, we aim to bring together design and HCI researchers and practitioners from various fields and backgrounds who use data and AI when designing for wellbeing beyond clinical applications. We will discuss challenges in designing with AI for wellbeing originating from (a) the domains of design and (b) general issues in developing AI systems, and uncover new directions that emerge when coupling design, AI, and wellbeing. Through this workshop, we aim to create a conceptual framework that enables rich, meaningful, and ethical solutions for designing (with) AI for wellbeing, while also providing ways to mitigate negative consequences.
Conference presentations
- Workshop on Responsible Data, CVPR, June 2024
- Designing (with) AI for Wellbeing, CHI, May 2024