Mansplainable AI: Investigating Patronizing Language in Generative AI Chatbots

dc.contributor.advisor: Hancock, Mark
dc.contributor.advisor: MacArthur, Cayley
dc.contributor.author: Nova, Natalie
dc.date.accessioned: 2024-12-12T14:28:44Z
dc.date.available: 2024-12-12T14:28:44Z
dc.date.issued: 2024-12-12
dc.date.submitted: 2024-12-06
dc.description.abstract: As generative AI systems become increasingly prevalent in human communication, problem-solving, and everyday workflows, the nature of their text responses raises important questions about explanation and interpretation. Feminist literature critiques the concept of explanation, suggesting that it can come across as condescending and can manifest as a form of “mansplaining.” This thesis interrogates the reception of AI-generated explanations, focusing on how gender and perceived communication style influence user perceptions. I conducted a study using three distinct OpenAI chatbots (Mansplaining, Default, and Compassionate), each configured with a different built-in prompt, in a sentiment analysis task involving 108 participants. My findings reveal significant differences in how these chatbots were perceived. The Mansplaining chatbot was consistently viewed as more dominant, patronizing, and unfriendly, and was rated lower on respect, consideration, warmth, and supportiveness. Notably, participants, particularly women, perceived it as believing it possessed greater knowledge and expertise than they did, leaving them feeling inadequate about their own competence and experience. In contrast, the Default chatbot was rated as less considerate than the Compassionate chatbot, yet women perceived the Default chatbot as more confident than men did. I analyzed responses from non-binary participants separately to examine their perceptions. Finally, I examined comments from 46 participants, which revealed patterns that aligned closely with the quantitative results, further substantiating them. These results underscore the critical impact that the communication style of generative AI explanations has on user experience, particularly through the lens of gender dynamics. With these findings, this thesis aims to promote the design of more equitable and empathetic AI systems that account for sociotechnical factors, and I advocate for a re-evaluation of AI explanation frameworks, emphasizing designs that foster inclusivity, respect, and understanding in human-AI interactions.
dc.identifier.uri: https://hdl.handle.net/10012/21232
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.relation.uri: https://github.com/waterloo-touchlab/Chatbots/tree/main/study
dc.subject: human-computer interaction
dc.subject: user studies
dc.subject: generative AI
dc.subject: explanations
dc.subject: mansplaining
dc.subject: responsible AI
dc.title: Mansplainable AI: Investigating Patronizing Language in Generative AI Chatbots
dc.type: Master Thesis
uws-etd.degree: Master of Applied Science
uws-etd.degree.department: Management Sciences
uws-etd.degree.discipline: Management Sciences
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Hancock, Mark
uws.contributor.advisor: MacArthur, Cayley
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
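
The study summarized in the abstract above configured three OpenAI chatbots that differ only in their built-in prompts. As a minimal sketch, assuming the OpenAI Python SDK (v1+), the code below illustrates how such persona conditions can be set up via system prompts for a sentiment analysis task; the persona prompt texts, model name, and function name are hypothetical placeholders, not the thesis's actual prompts, which are available in the repository linked under dc.relation.uri.

    # Minimal sketch, assuming the openai Python SDK (v1+); not the thesis's actual implementation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical persona prompts standing in for the three study conditions.
    PERSONAS = {
        "mansplaining": "Explain things in a condescending, overconfident tone, assuming the user knows very little.",
        "default": "You are a helpful assistant.",
        "compassionate": "Explain things warmly and respectfully, acknowledging the user's own knowledge and experience.",
    }

    def explain_sentiment(persona: str, text: str) -> str:
        """Ask the persona-configured chatbot to classify and explain the sentiment of a text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
            messages=[
                {"role": "system", "content": PERSONAS[persona]},
                {"role": "user", "content": f"What is the sentiment of this text, and why? {text}"},
            ],
        )
        return response.choices[0].message.content

    # Example: compare how two personas explain the same sentiment judgment.
    print(explain_sentiment("compassionate", "The service was slow, but the staff were kind."))
    print(explain_sentiment("mansplaining", "The service was slow, but the staff were kind."))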

Files

Original bundle

Name: Nova_Natalie.pdf
Size: 1000.42 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed to upon submission
