A User-Centered Design Approach to an Artificial Intelligence-Enabled Electronic Medical Record Encounter in Canadian Primary Care
Date
2025-05-26
Authors
Advisor
Burns, Catherine
Publisher
University of Waterloo
Abstract
Introduction
The Canadian primary care system serves as the first point of contact for patients entering the healthcare system and plays a crucial role in ensuring continuity of care. Primary care clinicians provide a broad range of services, including disease prevention, health promotion, diagnosis, treatment, and coordination of referrals to specialized care.
Despite the widespread adoption of electronic medical records (EMRs) in primary care, their development has historically lagged in addressing the specific needs of primary care providers. For example, chronic disease management (CDM) is a core responsibility in primary care. Yet, EMRs have not evolved to incorporate native, CDM-focused tools that align with the complex, long-term nature of managing chronic conditions. With the emergence of artificial intelligence (AI), there is growing recognition of its potential to enhance clinical decision-making, optimize workflows, and improve patient outcomes. Despite these promising advancements, clinicians remain hesitant to adopt AI due to concerns surrounding trust, usability, and seamless integration into existing workflows.
Primary care is a complex and dynamic environment where clinicians must balance efficiency with patient-centered care, making it critical for AI systems to align with their needs rather than introduce additional cognitive or administrative burdens. Trust in AI is not inherent; it is a critical factor influencing clinicians' willingness to incorporate AI-driven tools into their practice. Without trust, even the most advanced AI systems risk going unadopted, as clinicians must have confidence that these technologies will enhance, rather than hinder, patient care. Concerns around reliability, accuracy, and alignment with clinical workflows contribute to skepticism, making AI adoption a complex challenge. Clinicians also worry about unintended consequences for patient outcomes and professional responsibility, fearing that AI-generated recommendations could lead to diagnostic errors, inappropriate treatments, or a diminished sense of clinical accountability. Broader ethical and legal uncertainties further complicate integration, as unresolved questions regarding liability, accountability, and regulatory oversight leave clinicians uncertain about the implications of relying on AI-enabled tools embedded in their EMRs. Without clear governance structures and well-defined safeguards, hesitation around AI use in healthcare will persist, underscoring the need for thoughtful design and policy development. This study investigates how a user-centered design approach can address these challenges, ensuring that AI-enabled EMR encounters enhance, rather than disrupt, the clinician's role in primary care. Through qualitative engagement with practicing primary care clinicians in Ontario, this research explores the design considerations necessary to address the concerns that may hinder the adoption of AI-enabled tools in primary care clinical interactions.
Methods
This research employed a user-centered design approach, incorporating a two-phase semi-structured qualitative interview process with 14 primary care clinicians practicing in Ontario. In the first phase, scenario-based interviews were conducted in which clinicians interacted with a standard EMR encounter module, allowing for the development of an initial sequence model that mapped out the typical workflow clinicians follow during patient encounters.
In the second phase, clinicians engaged with AI-enabled mock-ups built around the same scenarios as the first phase. These mock-ups included an AI-generated summary with a numeric confidence score, the concept of approve/edit/decline buttons, and underlined actionable steps such as selecting medications. Clinicians provided feedback on the AI-enabled EMR encounter mock-ups, highlighting aspects they liked, elements they found distracting, areas where they had reservations about AI inclusion, and other general sentiments. A thematic analysis was conducted on the qualitative interview transcripts, which informed iterative redesigns of the AI-enabled interface.
These themes were then examined through the Technology Acceptance Model (TAM), using its constructs of perceived ease of use and perceived usefulness to structure design requirements for revised mock-ups. A second round of semi-structured validation interviews was conducted to assess the effectiveness of the redesigned AI-enabled EMR encounter, with results tabulated to capture clinician feedback. Additionally, the System Usability Scale (SUS) was administered to quantify clinicians' perceptions of the redesigned interface, providing a standardized measure of its usability and potential for adoption in primary care settings.
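For context, the SUS aggregates ten five-point Likert items into a single score between 0 and 100. Under the instrument's standard scoring scheme (where s_i denotes the response to item i), odd-numbered, positively worded items contribute their rating minus one, even-numbered, negatively worded items contribute five minus their rating, and the sum is scaled by 2.5:

\[
\mathrm{SUS} = 2.5 \left[ \sum_{i \in \{1,3,5,7,9\}} (s_i - 1) + \sum_{i \in \{2,4,6,8,10\}} (5 - s_i) \right], \qquad s_i \in \{1,\dots,5\}
\]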
Results
The initial sequence model was developed to map out a clinician's typical workflow during a clinical encounter, providing a structured understanding of key interactions from patient intake and history-taking to diagnosis, treatment planning, and documentation. This model highlighted points in the encounter where AI could support, rather than disrupt, established workflows by aligning with real-world clinician behavior. It also helped identify inefficiencies and moments where AI could provide meaningful assistance without diminishing clinician autonomy.
Through thematic analysis, six key themes were identified that contribute to a greater understanding of how AI-enabled EMR encounters can be designed for adoption: trust and transparency, clinician authority, workflow efficiency, complexity sensitivity, legal and ethical concerns, and human-centered design.
Clinicians approach AI with skepticism, expecting trust to be earned through high performance and transparency. They prefer AI that reinforces their authority by providing relevant, non-intrusive support rather than acting autonomously. Trust is further influenced by clarity in AI-generated confidence scores and the ability to audit recommendations. Maintaining clinician decision-making authority is essential.
AI is viewed as most beneficial when offering supportive suggestions, such as reminders or preventative care guidelines, while avoiding directive or overly prescriptive actions.
AI adoption also depends on complexity sensitivity, where clinicians favor AI-assisted recommendations in straightforward cases like uncomplicated urinary tract infections but remain cautious in complex, nuanced scenarios such as mental health visits, uncovering a potential pathway for further exploration.
Workflow efficiency and seamless integration into clinical practice are critical, with clinicians expressing a desire for AI-generated summaries that are concise, patient-centered, and structured to minimize cognitive burden. Introducing unnecessary steps or disrupting existing workflows is seen as a barrier to adoption; clinicians emphasize their time constraints and high-volume workloads, requiring that disruption be minimal and that patient care remain the priority.
Legal and ethical concerns also play a significant role. Clinicians expect clear delineation between AI-generated and clinician-modified content, as well as well-defined policies on liability and auditability.
Finally, human-centered design remains paramount, with clinicians emphasizing that AI should enhance rather than replace the clinician-patient relationship. AI should be positioned to operate unobtrusively in the background, ensuring efficiency without diminishing clinical judgment or disrupting the clinician-patient interaction during a visit.
These findings directly informed the redesigned screens, with ease of use and usefulness from TAM serving as guides for the redesign requirements. An "AI Assist" button was introduced, allowing clinicians to engage AI support at their discretion rather than imposing AI-driven suggestions on their workflow, reinforcing clinician autonomy and minimizing cognitive burden. Additionally, color-coded delineations were implemented to clearly distinguish AI-generated content from clinician-entered data, ensuring transparency while maintaining workflow efficiency. An optional descriptive confidence score feature was incorporated, addressing varying clinician preferences by allowing them to toggle the feature on or off as needed. The language of the AI-generated summary was also adjusted toward a supportive, rather than overly prescriptive, tone.
These insights guided the development of redesigned screens intended to reinforce trust, reduce friction between clinicians and AI, maintain flexibility, and preserve the clinician's role as the decision-maker. Validation interviews indicated that these design modifications potentially improved usability by positioning AI in the EMR encounter to better align with clinicians' preferences and expectations, as informed by the thematic analysis, ultimately increasing the likelihood of adoption.
Conclusions
AI adoption in primary care depends on thoughtful design that prioritizes clinician autonomy, high performance, and seamless workflow integration. While clinicians recognize AI's potential benefits, they remain cautious about unintended consequences. Designing AI as a supportive, rather than directive, tool is key to fostering trust and improving adoption. Future research should focus on real-world implementation, longitudinal studies of AI adoption, and regulatory frameworks that address liability and ethical considerations.
Contributions
This study contributes to the academic and practical understanding of integrating AI-enabled clinical tools in Canadian primary care EMRs. From a scientific perspective, it advances knowledge on the barriers and facilitators of AI adoption in EMRs, identifying key design principles that can encourage adoption.
From a practical standpoint, this study provides concrete design recommendations that can inform AI developers, EMR vendors, and healthcare policymakers. Additionally, this study contributes to the legal and ethical discourse surrounding AI in healthcare by highlighting unresolved questions regarding liability, data privacy, and patient consent. The findings call for regulatory frameworks that protect clinicians from undue legal risks while ensuring that AI systems are accountable, interpretable, and aligned with best medical practices.
Finally, this research offers a framework for future AI integration, emphasizing the need for context-sensitive AI models that adapt to case complexity. Through these contributions, this research informs the responsible design and deployment of AI in primary care, helping to ensure that technology serves as a sustainable partner in healthcare delivery.
Keywords
artificial intelligence, electronic medical record, primary care