Mansplainable AI: Investigating Patronizing Language in Generative AI Chatbots
Date
2024-12-12
Authors
Advisor
Hancock, Mark
MacArthur, Cayley
Publisher
University of Waterloo
Abstract
As generative AI systems become increasingly prevalent in human communication, problem-solving, and everyday workflows, the nature of their text responses raises important questions about explanation and interpretation. Feminist literature critiques the concept of explanation, suggesting that it can be perceived as condescending and can manifest as a form of "mansplaining." This thesis interrogates the reception of AI-generated explanations, focusing on how gender and perceived communication style influence user perceptions. I conducted a study with 108 participants who completed a sentiment analysis task using one of three OpenAI chatbots (Mansplaining, Default, and Compassionate), each configured with a different built-in prompt. The findings reveal significant differences in how these chatbots are perceived. The Mansplaining chatbot was consistently viewed as more dominant, patronizing, and unfriendly, and was rated lower on respect, consideration, warmth, and supportiveness. Notably, participants, particularly women, perceived it as believing it possessed greater knowledge and expertise than they did, leading to feelings of inadequacy about their own competence and experience. In contrast, the Default chatbot was rated as less considerate than the Compassionate chatbot, yet women perceived the Default chatbot as more confident than men did. I analyzed non-binary participants' responses separately to characterize their perceptions. Finally, I examined open-ended comments from 46 participants, which revealed patterns closely aligned with the quantitative results, further substantiating the findings. These results underscore the critical impact of communication style in generative AI explanations on user experience, particularly through the lens of gender dynamics. With these findings, this thesis aims to promote the design of more equitable and empathetic AI systems that account for sociotechnical factors.
I advocate for a re-evaluation of AI explanation frameworks, emphasizing the need for designs that foster inclusivity, respect, and understanding in human-AI interactions.
Keywords
human-computer interaction, user studies, generative AI, explanations, mansplaining, responsible AI