Understanding Human Decision Variability and the Effects of AI System Data Modality on Trust and Acceptance in Human–AI Collaboration: A Human-Centred Approach to Designing AI Systems in Healthcare Contexts

Advisor

Burns, Catherine

Journal Title

Journal ISSN

Volume Title

Publisher

University of Waterloo

Abstract

Effective decision-making is a complex cognitive process that plays a crucial role in high-stakes domains such as healthcare, where inconsistencies in judgment can significantly impact outcomes. Decision-making strategies vary widely between individuals, particularly across levels of expertise [Rasmussen, 1983], often leading to inconsistency and inefficiency [Curran et al., 2022], a phenomenon well documented in the cognitive and decision sciences. As Artificial Intelligence (AI) becomes increasingly integrated into domains including finance, transportation, and healthcare, it presents new opportunities to enhance human decision-making. With recent advancements, AI has the potential to mitigate these challenges, such as inter-user variation, by providing standardized, data-driven recommendations. It can support novice reasoning by guiding decision processes and complement expert intuition when aligned with users’ strengths and limitations [Inkpen et al., 2023]. However, other research has shown that standalone AI systems cannot be fully relied upon because of inherent biases, limited contextual understanding, and an inability to adapt dynamically to the complexities of human decision-making. These limitations necessitate a shift toward human-AI collaborative systems, in which AI serves as a complementary tool that enhances human performance. Effective collaboration, however, depends on calibrated trust in and high acceptance of AI systems, which are influenced by numerous factors, including the underexplored effect of AI system data modality.

This thesis investigates variations in decision-making strategies between novices and experts and examines how AI systems can bridge these gaps through human-centric design. It further explores how data modality in AI systems, particularly unimodal versus multimodal data, affects human-AI collaboration. A two-phase, mixed-methods study combining qualitative interviews and quantitative evaluations was conducted in the healthcare domain, with glaucoma diagnosis as the case study.

The first phase examined variations in decision-making strategies between novices and experts. Experts adopted more dynamic and efficient approaches: they integrated a wider range of factors, emphasized progression analysis, identified complex patterns and correlations, and dynamically balanced positive and negative decision factors according to contextual severity. They demonstrated cognitive efficiency by filtering out extraneous information and prioritizing critical data points. In contrast, novices relied on more structured and analytical methods, often overemphasizing explicit indicators and struggling to balance conflicting evidence, and their decision factors varied considerably across scenarios. The impact of data availability was also evident: novices were more adversely affected by limited data than experts.

The second phase evaluated user interactions with unimodal and multimodal AI systems designed for glaucoma diagnosis, measuring trust and acceptance with statistical methods. Multimodal systems consistently outperformed unimodal systems by integrating diverse data sources that mirror how clinicians process information, producing better alignment with their workflow. This, in turn, led to significantly higher trust and acceptance of multimodal systems than of unimodal systems (p < 0.01). Optometrists’ decision-making performance also improved, with multimodal systems supporting higher performance than unimodal systems. An interaction effect between user factors (expertise, gender) and system type was observed as well, with notable differences in accuracy and confidence levels.

These findings underscore the need for human-centric AI systems that support both novice learning and expert decision-making, leveraging data modalities that align with users’ cognitive processes to foster calibrated trust and high acceptance. By enhancing human-AI collaboration, such systems can improve decision-making consistency and optimize outcomes across diverse contexts. This thesis makes interdisciplinary contributions to human factors, AI, and optometry by investigating cognitive variability in glaucoma diagnosis and examining how data modality influences trust and acceptance in human-AI collaborative environments. In human factors, it offers insights into how clinical expertise shapes diagnostic reasoning, revealing distinct approaches to data use, heuristic application, and uncertainty management between novices and experts. In AI, it proposes the design of human-centered AI systems that support optometrists with varying levels of expertise, and it evaluates unimodal and multimodal decision-support systems, demonstrating that multimodal systems align more closely with clinicians’ workflows and yield higher trust, acceptance, and diagnostic performance. In optometry, the research examines how clinicians interpret glaucoma data, highlighting differences in reasoning strategies such as progression analysis and data prioritization, and informs the development of AI tools that fit optometrists’ decision-making processes and integrate seamlessly into routine clinical workflows.

In conclusion, this thesis explores variations in human decision-making and demonstrates the value of human-centric AI systems in supporting both novices and experts. It identifies data modality as a key factor influencing trust and acceptance, providing a rationale for building AI systems on the same data clinicians already rely on to achieve better workflow alignment, higher trust, and greater system acceptance. These insights provide a strong foundation for future AI integration in optometry and other high-stakes medical domains.

Description

LC Subject Headings

Citation