Understanding the Crucial Role of Trust When Implementing AI in Healthcare: A Scoping Review
Artificial intelligence (AI) holds immense potential to revolutionize healthcare by improving efficiency, reducing costs and workloads, and enhancing diagnostic accuracy. AI, defined as computerized systems capable of performing tasks that typically require human intelligence, is attracting significant investment from bodies such as the European Union. However, despite rapid advancements, the practical implementation of AI in healthcare services has been notably slow. AI systems often present complexities, unpredictability, and a lack of established evidence, raising concerns about patient harm, bias, and privacy. Consequently, trust in AI, and the trustworthiness of these systems, have emerged as critical factors influencing adoption. Trust is foundational in healthcare, helping to manage the uncertainty and complexity inherent in vulnerable patient situations. Yet much AI research focuses on performance, ethics, and technical aspects such as transparency, often overlooking the implementation context itself. Successfully integrating AI into routine clinical practice requires a deeper understanding of how trust operates within these change processes. This scoping review explores how the scientific literature conceptualizes trust in AI in relation to healthcare implementation and identifies the key factors that influence this trust, aiming to clarify concepts essential for developing effective implementation strategies.
Methods: Scoping the Literature on Trust and AI Implementation
To explore the existing scientific knowledge, a scoping review methodology was employed, following the framework by Arksey and O’Malley and PRISMA-ScR guidelines. The review focused on two primary research questions:
- How is trust in AI conceptualized concerning implementation in healthcare?
- What factors influence trust in AI concerning implementation in healthcare?
An extensive literature search was conducted across five major electronic databases (PubMed, CINAHL, PsycINFO, Web of Science, Scopus) using terms related to implementation, AI, healthcare, and specifically “trust”. Standardized subject headings and truncations were used. Reference lists of identified articles were also reviewed manually.
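For illustration only, a simplified stand-in for such a search (the review's exact strings and subject headings are not reproduced here) can be expressed as a query against PubMed's public E-utilities API; the concept blocks and truncations below are assumptions modeled on the description above.

```python
# Illustrative sketch only: a simplified query combining the review's four
# concept blocks (implementation, AI, healthcare, trust) with truncation,
# sent to PubMed's E-utilities search endpoint. Not the review's actual
# search strategy, which also used standardized subject headings and ran
# across five databases.
import requests

QUERY = (
    '("artificial intelligence" OR "machine learning") '
    'AND (healthcare OR "health care" OR clinical) '
    "AND (implement* OR adopt* OR integrat* OR uptake) "
    "AND (trust OR trustworth*)"
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": QUERY, "retmax": 0, "retmode": "json"},
    timeout=30,
)
print(resp.json()["esearchresult"]["count"])  # number of matching records
```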
Eligibility criteria ensured relevance to the research questions. Articles were included if they: (a) addressed “trust” in AI, (b) in relation to implementation in healthcare, (c) were published in English after 2012 (to focus on recent AI developments), and (d) were peer-reviewed. Studies merely mentioning trust without substantial discussion or those not linking trust to AI implementation in healthcare were excluded. Implementation was defined as “An intentional effort designed to change or adapt or uptake interventions into routines”.
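The eligibility rules lend themselves to a compact decision rule. A minimal sketch, assuming hypothetical field names (screening in the review was of course performed manually by two reviewers):

```python
# Minimal sketch of the eligibility criteria as a screening predicate.
# Field names are hypothetical; they mirror criteria (a)-(d) above.
from dataclasses import dataclass

@dataclass
class Record:
    addresses_trust_in_ai: bool     # (a) trust in AI discussed substantially
    linked_to_implementation: bool  # (b) trust tied to implementation in healthcare
    year: int                       # (c) published after 2012
    in_english: bool                # (c) English-language publication
    peer_reviewed: bool             # (d) peer-reviewed

def is_eligible(r: Record) -> bool:
    return (
        r.addresses_trust_in_ai
        and r.linked_to_implementation
        and r.year > 2012
        and r.in_english
        and r.peer_reviewed
    )

# A study merely mentioning trust in passing fails criterion (a):
print(is_eligible(Record(False, True, 2020, True, True)))  # False
```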
[Table: inclusion and exclusion criteria for the scoping review.]
Two reviewers independently screened titles/abstracts and subsequently full texts using Rayyan software, resolving disagreements through discussion with the wider author team. Data from included studies were charted using a standardized form covering characteristics like country, publication year, design, setting, aim, AI application area, user, and definition of trust. A thematic analysis using an inductive approach (following Braun and Clarke) was performed on the extracted data to address the research questions. Methodological quality assessment was not performed, consistent with scoping review guidance.
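The charting form described above can be pictured as a simple record type, one instance per included study. This is a sketch under assumed field names, not the authors' actual form:

```python
# Sketch of the standardized data-charting form; field names follow the
# characteristics listed in the text but are otherwise assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChartedStudy:
    country: str
    publication_year: int
    design: str                      # e.g., quantitative, opinion, mixed methods
    setting: str                     # e.g., hospital, home healthcare
    aim: str
    ai_application_area: str         # e.g., diagnostics, image recognition
    intended_user: str               # e.g., clinicians, patients, general population
    trust_definition: Optional[str]  # None when trust was defined only indirectly

# Hypothetical entry for illustration (not one of the eight included studies):
example = ChartedStudy(
    country="United States",
    publication_year=2020,
    design="quantitative",
    setting="hospital (radiology)",
    aim="examine clinicians' trust in a diagnostic AI tool",
    ai_application_area="diagnostics",
    intended_user="clinicians",
    trust_definition="willingness to rely on the AI's capabilities",
)
```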
[Figure: PRISMA flowchart of the study selection process, from 815 records identified to 8 included studies.]
The search yielded 815 articles initially, reduced to 454 after removing duplicates. Abstract screening excluded 426 articles, primarily because trust was mentioned superficially or not linked to AI implementation in healthcare. Full-text review of the remaining 28 articles led to the exclusion of 20 more for similar reasons. Ultimately, eight articles met all inclusion criteria and formed the basis of this review.
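The selection flow is internally consistent, as a quick arithmetic check confirms:

```python
# Consistency check of the reported selection flow: each stage equals the
# previous stage minus that stage's exclusions.
identified = 815
after_duplicates = 454                       # 361 duplicates removed
full_text_reviewed = after_duplicates - 426  # abstracts excluded -> 28
included = full_text_reviewed - 20           # full texts excluded -> 8

assert identified - after_duplicates == 361
assert full_text_reviewed == 28 and included == 8
print(f"{identified} -> {after_duplicates} -> {full_text_reviewed} -> {included}")
```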
Findings: Conceptualizing and Influencing Trust in AI Healthcare Implementation
Study Characteristics
The eight included studies were published between 2018 and 2022 and originated predominantly from the United States (n=3) and China (n=2), with one study each from the UK, India, and the Netherlands. Six studies focused on hospital settings (e.g., radiology, dermatology, robotic surgery), while two addressed home healthcare management or healthcare in general. AI applications primarily involved diagnostics (n=4), with others covering brain modeling, image recognition, smart services, treatment, surgery, and communication. Methodologies included quantitative studies (n=4), opinion papers (n=3), and one mixed-methods study. The intended users varied, including clinicians (n=4), the general population (n=2), and patients (n=1).
[Table: characteristics of the eight included studies (country, year, design, setting, aim, AI application, user, and definition of trust).]
Conceptualizations of Trust in AI
The review found varied conceptualizations of trust across the included studies. Six studies offered explicit definitions.
- Individual Perspective (Technology-Focused): Four empirical studies viewed trust primarily from an individual’s perspective, focusing on their propensity or willingness to rely on the AI’s capabilities. Trust was described as the perception of AI being dependable, reliable, and trustworthy for healthcare tasks.
- Contextual Perspective (Relational Focus): Two studies adopted a broader, contextual view, emphasizing trust as relational between people within the AI application context, rather than solely trust in the technology itself. One argued for developing the “human side,” highlighting trust relationships between patients, clinicians, and researchers. Another focused on trust within the clinical encounter, understanding it as belief in the trustworthiness of the encounter itself, encompassing both clinicians and the tools they use.
- Indirect Definition: Two studies defined trust implicitly through its determinants or described it as having a mediating role between system characteristics and AI use.
Factors Influencing Trust When Implementing AI in Healthcare
The thematic analysis identified three interconnected themes influencing trust during AI implementation in healthcare: Individual characteristics, AI characteristics, and Contextual characteristics. Each is described below, and a schematic summary follows the three subsections.
[Table: results of the inductive thematic analysis, outlining the three main themes (Individual, AI, and Contextual characteristics) with their subthemes and codes.]
Individual Characteristics
These are qualities unique to individuals that shape their trust in AI implementation.
- Demographic Characteristics: Male gender, higher education, employment status (employed or student), and a Western background were associated with higher trust in AI among the general population in one study. Age and education also moderated relationships between trust antecedents and behavioral intention.
- Disposition to Trust: An individual’s general tendency to depend on technology, influenced by life experiences and cultural background, impacted trust levels among clinicians.
- Knowledge and Skills: Familiarity with technology and prior usage experience influenced trust. Higher technological skills often correlated with greater trust, highlighting the potential need for education and training. Interestingly, radiologists’ existing familiarity with complex machinery meant ease of use was less of a concern for their adoption decisions.
- Personal Traits: Cognitive factors and personality, such as having a generally trusting disposition towards technology or a positive attitude, were linked to higher trust levels.
- Health Conditions: Personal health status and healthcare consumption patterns played a role. Individuals with chronic conditions expressed less trust in AI applications lacking physician interaction. Conversely, those utilizing less healthcare tended to show higher trust in AI.
AI Characteristics
Features inherent to the AI technology itself significantly impact trust.
- Individualization/Personalization: AI’s ability to tailor care based on unique patient health information enhanced trust. However, this requires sharing sensitive data, raising privacy concerns.
- Anthropomorphism: AI exhibiting humanlike characteristics (appearance, perceived self-consciousness, emotion) fostered a sense of social presence and increased trust.
- “Black Box” Nature: The non-transparent, self-learning, and autonomous aspects of some AI systems created uncertainty and eroded trust because users could not fully understand the inputs and operations leading to decisions.
- Technical Objectivity: Characteristics like being data-driven, accurate, and lacking human biases or emotions were related to trust. In some scenarios, AI’s perceived objectivity could lead to results deemed more reliable than human experts, fostering trust.
Contextual Characteristics
This theme encompasses the broader environment in which AI is implemented and used.
- Healthcare Culture: The specific medical area, established professional expertise norms, and the opinions of influential figures heavily impacted trust. “Skilled clinicians” rely on tacit knowledge built over years within expert communities. Opinions of colleagues, seniors, and other clinicians significantly shaped initial trust levels. Perceived risks also varied by medical area (e.g., radiology vs. robotic surgery).
- Interpersonal Relationships: Collaboration, personal interactions, and mutual understanding among stakeholders (patients, clinicians, researchers) were crucial. Reduced communication due to AI implementation was perceived to decrease patient trust. Individuals valuing personal interaction showed less trust in AI across different medical fields.
- Governance: Clearly defined policies, standards, and guidelines were essential for building trust. A lack of clear governance frameworks in the medical context contributed to uncertainty and lower trust. Stakeholder-consented frameworks and goals were highlighted as important for enhancing trust and enabling self-governance. Policies encouraging clinician engagement in AI evaluation were suggested to promote responsible adoption.
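Taken together, the theme structure described in the three subsections above can be summarized schematically. The sketch below paraphrases subtheme labels from the prose; the original analysis table also listed codes under each subtheme:

```python
# Schematic summary of the thematic structure; subtheme labels are
# paraphrased from the text, not copied from the original coding table.
TRUST_THEMES = {
    "Individual characteristics": [
        "demographic characteristics",
        "disposition to trust",
        "knowledge and skills",
        "personal traits",
        "health conditions",
    ],
    "AI characteristics": [
        "individualization/personalization",
        "anthropomorphism",
        "'black box' nature",
        "technical objectivity",
    ],
    "Contextual characteristics": [
        "healthcare culture",
        "interpersonal relationships",
        "governance",
    ],
}

for theme, subthemes in TRUST_THEMES.items():
    print(f"{theme}: {', '.join(subthemes)}")
```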
Discussion: Towards a Holistic Understanding of Trust in AI Implementation
This review revealed significant variation in how trust is conceptualized and what factors are considered influential when implementing AI in healthcare. While AI implementation research is growing, studies focusing explicitly on trust within this process are limited, recent, and predominantly from high-income nations.
Many empirical studies adopted a cross-sectional approach, measuring trust as individual attitudes towards AI capabilities, often based on limited or no practical experience with the tools. This reliance on perceptions rather than real-world usage experiences limits the applicability of findings for developing robust implementation strategies. These studies often neglect the crucial influence of context and underlying values.
The conceptualizations of trust varied, ranging from individual beliefs about technology to relational trust between people. Some studies defined trust indirectly through its determinants or positioned it as a mediator. This highlights the complexity of trust, suggesting it operates across multiple levels and dimensions. A holistic understanding is therefore essential.
The identified themes influencing trust—Individual, AI, and Contextual Characteristics—align well with established implementation science frameworks like the Consolidated Framework for Implementation Research (CFIR). Trust can be seen as an implementation outcome or mediating factor within such frameworks, influenced by various determinants across different domains (Intervention Characteristics, Outer Setting, Inner Setting, Characteristics of Individuals, Process).
[Figure: the identified determinants of trust mapped onto the domains and constructs of the Consolidated Framework for Implementation Research (CFIR).]
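Since the figure itself is not reproduced, the sketch below illustrates one plausible way the review's determinants could sit within CFIR domains. The pairings are assumptions for illustration, not the article's exact mapping:

```python
# Illustrative (assumed) mapping of trust determinants onto CFIR domains;
# consult the article's own figure for the exact mapping.
CFIR_MAPPING = {
    "Intervention Characteristics": ["AI characteristics, e.g., 'black box' nature"],
    "Outer Setting": ["governance: policies, standards, guidelines"],
    "Inner Setting": ["healthcare culture", "interpersonal relationships"],
    "Characteristics of Individuals": [
        "demographics", "disposition to trust", "knowledge and skills"
    ],
    "Process": ["engaging clinicians in AI evaluation"],
}

for domain, determinants in CFIR_MAPPING.items():
    print(f"{domain}: {'; '.join(determinants)}")
```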
Individual characteristics like vulnerability (associated with lower education, unemployment, chronic conditions) were linked to lower trust, potentially reflecting perceptions of control and empowerment. The “black box” nature of AI also eroded trust, while knowledge and skills enhanced it, again suggesting a link to perceived control.
Contextual knowledge emerged as highly influential. Trust was linked to perceiving AI as meaningful and valuable within the specific healthcare context. Technical objectivity alone was insufficient; relational aspects like empathy and compassion, central to person-centered care, also mattered, explaining why anthropomorphism could enhance trust. The strong influence of healthcare culture (opinions of peers, established expertise) and the importance of interpersonal relationships and governance further underscore the need to understand AI implementation as a social and organizational process, not just a technical one.
This resonates with Normalization Process Theory (NPT), which views implementation as potentially challenging existing work practices and ways of thinking. NPT suggests collective sense-making is needed to understand roles, responsibilities, and values concerning AI use. Trust, as Luhmann suggested, thrives in a familiar world; therefore, implementing AI requires aligning the technology with existing values and negotiating new agreements within the specific healthcare context. Ignoring these contextual and value-based aspects can create significant barriers to trust and successful implementation.
Strengths and Limitations
Strengths include a systematic search strategy developed with a librarian, dual independent review for study selection, and adherence to scoping review guidelines. Limitations include the nascent nature of the research field, which constrained the number of eligible studies. The inclusion of varied methodologies (empirical, opinion) means the findings mix results and reflections. The review was limited to English-language, peer-reviewed publications, potentially missing relevant grey literature or non-English studies. The small number of included studies, mostly from high-income countries, limits generalizability.
Implications and Future Directions
The varied approaches to trust highlight the need for holistic perspectives in future research and practice. Focusing solely on individual attitudes or AI features is insufficient. Empirical studies should move beyond cross-sectional designs based on limited experience and explore trust development over time within real-world implementation settings. Research should investigate how AI can be aligned with existing healthcare values and how social interactions shape trust. The influence of “important others” (peers, leaders) warrants further investigation regarding their role in facilitating trust. Understanding how trust evolves as users gain experience and maturity with AI systems is another critical area for future longitudinal research.
Conclusion
Trust is a complex and multifaceted concept vital for successfully implementing AI in healthcare. This scoping review found that current scientific literature conceptualizes trust in diverse ways, often focusing on individual perceptions of AI capabilities or defining it through its determinants. Three key themes influence trust: individual characteristics, AI characteristics, and contextual characteristics. However, much existing research adopts a limited perspective, neglecting the crucial role of the implementation context, social interactions, and alignment with existing healthcare values. To navigate the challenges and unlock the potential of AI in healthcare, future research and implementation efforts must adopt a more holistic view of trust, actively considering the interplay between technology, individuals, and the complex socio-organizational environment of healthcare settings. Developing appropriate strategies to foster trust is paramount for translating AI’s promise into tangible benefits for patients and clinicians.
References
1. Petersson L, Larsson I, Nygren JM, Nilsen P, Neher M, Reed JE, et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res. (2022) 22:850. doi: 10.1186/s12913-022-08215-8
2. EPRS. Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts (2022). https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf (Accessed November 22, 2022).
3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–56. doi: 10.1038/s41591-018-0300-7
4. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare J. (2021) 8(2):e188–94. doi: 10.7861/fhj.2021-0095
5. Mehta N, Pandit A, Shukla S. Transforming healthcare with big data analytics and artificial intelligence: a systematic mapping study. J Biomed Inform. (2019) 100:103311. doi: 10.1016/j.jbi.2019.103311
6. European Commission. A European approach to artificial intelligence (2022). https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (Accessed November 9, 2022).
7. Sharma M, Savage C, Nair M, Larsson I, Svedberg P, Nygren JM. Artificial intelligence application in health care practice: scoping review. J Med Internet Res. (2022) 24:e40238. doi: 10.2196/40238
8. HLEG. Ethics guidelines for trustworthy AI (2019). https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (Accessed March 2, 2023).
9. Gille F, Jobin A, Ienca M. What we talk about when we talk about trust: theory of trust in healthcare. Intell-Based Med. (2020) 1-2:100001. doi: 10.1016/j.ibmed.2020.100001
10. Gille F, Smith S, Mays N. Why public trust in health care systems matters and deserves greater research attention. J Health Serv Res Policy. (2015) 20(1):62–4. doi: 10.1177/1355819614543161
11. Luhmann N. Trust and power. Cambridge: Polity Press (2017). 224.
12. Asan O, Emrah Bayrak A, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. (2020) 22:e15154. doi: 10.2196/15154
13. Luhmann N. Familiarity, confidence, trust: problems and alternatives. In: Gambetta D, editors. Trust: Making and breaking cooperative relations. Oxford: University of Oxford (2000). p. 94–107.
14. Dlugatch R, Georgieva A, Kerasidou A. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. BMC Med Ethics. (2023) 24:42. doi: 10.1186/s12910-023-00917-w
15. Hawley K. How to be trustworthy. Oxford, New York: Oxford University Press (2019). 176.
16. Ryan M. In AI we trust: ethics, artificial intelligence, and reliability. Sci Eng Ethics. (2020) 26(4):2749–67. doi: 10.1007/s11948-020-00228-y
17. O’Neill O. Linking trust to trustworthiness. Int J Philos Stud. (2018) 26(2):293–300. doi: 10.1080/09672559.2018.1454637
18. Fernandes M, Vieira SM, Leite F, Palos C, Finkelstein S, Sousa JMC. Clinical decision support systems for triage in the emergency department using intelligent systems: a review. Artif Intell Med. (2020) 102:101762. doi: 10.1016/j.artmed.2019.101762
19. Zhang J, Zhang Z-M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. (2023) 23:7. doi: 10.1186/s12911-023-02103-9
20. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. (2019) 366:447–53. doi: 10.1126/science.aax2342
21. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: a systematic review. Soc Sci Med. (2022) 296:114782. doi: 10.1016/j.socscimed.2022.114782
22. Trocin C, Mikalef P, Papamitsiou Z, Conboy K. Responsible AI for digital health: a synthesis and a research agenda. Inf Syst Front. (2021). doi: 10.1007/s10796-021-10146-4
23. Gooding P, Kariotis T. Ethics and law in research on algorithmic and data-driven technology in mental health care: scoping review. JMIR Ment Health. (2021) 8:e24668. doi: 10.2196/24668
24. Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social consideration of AI-based medical-support tools: a scoping review. Int J Med Inf. (2022) 161:104738. doi: 10.1016/j.ijmedinf.2022.104738
25. Beil M, Proft I, van Heerden D, Sviri S, van Heerden PV. Ethical consideration about artificial intelligence for prognosis in intensive care. Intensive Care Med Exp. (2020) 7:70. doi: 10.1186/s40635-019-0286-6
26. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. (2021) 22:14. doi: 10.1186/s12910-021-00577-8
27. Coeckelbergh M. Ethics of artificial intelligence: some ethical issues and regulatory challenges. Technol Regul. (2019) 1:31–4. doi: 10.26116/techreg.2019.003
28. Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation frameworks for artificial intelligence translation into health care practice: scoping review. J Med Internet Res. (2022) 24:e32215. doi: 10.2196/32215
29. Svedberg P, Reed J, Nilsen P, Barlow J, Macrae C, Nygren J. Toward successful implementation of artificial intelligence in health care practice: protocol for a research program. JMIR Res Protoc. (2022) 11:e34920. doi: 10.2196/34920
30. Simon J. The Routledge handbook of trust and philosophy. New York: Routledge (2020). 454.
31. Asan O, Yu Z, Crotty BH. How clinician-patient communication affects trust in health information sources: temporal trends from a national cross-sectional survey. PLoS ONE. (2021) 16:e0247583. doi: 10.1371/journal.pone.0247583
32. Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. (2020) 98:245–50. doi: 10.2471/BLT.19.237198
33. Marková I. The dialogical mind. Common sense and ethics. Cambridge: Cambridge University Press (2016). 260.
34. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. (2018) 169:467–73. doi: 10.7326/M18-0850
35. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. (2005) 8:19–32. doi: 10.1080/1364557032000119616
36. Booth A, Sutton A, Clowes M, Martyn-St James M. Systematic approach to a successful literature review. London: Sage Publications (2021). 424.
37. Peters MDJ, Marnie C, Colquhoun H, Garritty CM, Hempel S, Horsley T, et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Rev. (2021) 10(263):1–6. doi: 10.1186/s13643-021-01821-3
38. Datta Burton S, Mahfoud T, Aicardi C, Rose N. Clinical translation of computational brain models: understanding the salience of trust in clinician-researcher relationships. Interdiscip Sci Rev. (2021) 46:1–2. doi: 10.1080/03080188.2020.1840223
39. Choi HH, Chang SD, Kohli MD. Implementation and design of artificial intelligence in abdominal imaging. Abdom Radiol. (2020) 45:4084–9. doi: 10.1007/s00261-020-02471-0
40. Sheridan TB. Individual differences in attributes of trust in automation: measurement and application to system design. Front Psychol. (2019) 10:1117. doi: 10.3389/fpsyg.2019.01117
41. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients’ perception toward human—artificial intelligence interaction in health care: experimental study. J Med Internet Res. (2021) 23:e25856. doi: 10.2196/25856
42. Reddy S, Allan S, Coghlan S, Cooper PA. A governance model for the application of AI in health care. J Am Med Inform Assoc. (2020) 27:491–7. doi: 10.1093/jamia/ocz192
43. Fan W, Liu J, Zhu W, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res. (2018) 294:567–92. doi: 10.1007/s10479-018-2818-y
44. McKnight DH. Trust in information technology. In: Davis GB, editors. The blackwell encyclopedia of management. Vol. 7 management information systems. Malden, MA: Blackwell (2005). p. 329–31.
45. Liu K, Tao D. The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput Human Behav. (2022) 127:107026. doi: 10.1016/j.chb.2021.107026
46. Prakash AV, Das S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: a mixed-methods study. Inf Manage. (2021) 58:103524. doi: 10.1016/j.im.2021.103524
47. Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. Acad Manage Rev. (1995) 20:709–34. doi: 10.2307/258792
48. Roski J, Maier EJ, Vigilante K, Kane EA, Matheny ME. Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc. (2021) 28:1582–90. doi: 10.1093/jamia/ocab065
49. Yakar D, Ongena YP, Kwee TC, Haan M. Do people favor artificial intelligence over physicians? A survey among the general population and their view on artificial intelligence in medicine. Value Health. (2021) 25:374–81. doi: 10.1016/j.jval.2021.09.004
50. Braun V, Clarke V. Thematic analysis. In: Cooper H, editors. APA Handbook of research methods in psychology: research designs. Washington, DC: American Psychological Association (2022). p. 57–91.
51. Nilsen P. Overview of theories, models and frameworks in implementation science. In: Nilsen P, Birken SA, editors. Handbook on implementation science. Cheltenham: Edward Elgar Publishing Limited (2020). p. 8–31. https://www.elgaronline.com/display/edcoll/9781788975988/9781788975988.00008.xml
52. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. (2020) 283:112461. doi: 10.1016/j.psychres.2019.06.036
53. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. (2009) 4:29. doi: 10.1186/1748-5908-4-29
54. Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. (2017) 12:125. doi: 10.1186/s13012-017-0657-x
55. Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. Conceptualizing outcomes for use with the consolidated framework for implementation research (CFIR): the CFIR outcomes addendum. Implement Sci. (2022) 17:7. doi: 10.1186/s13012-021-01181-5
56. May C, Cummings A, Girling M, Bracher M, Mair FS, May CM, et al. Using normalization process theory in feasibility studies and process evaluations of complex healthcare interventions: a systematic review. Implement Sci. (2018) 13:18. doi: 10.1186/s13012-018-0758-1
57. May CR, Albers B, Bracher M, Finch TL, Gilbert A, Girling M, et al. Translational framework for implementation evaluation and research: a normalization process theory coding manual for qualitative research and instrument development. Implement Sci. (2022) 17:19. doi: 10.1186/s13012-022-01191-x
58. Coeckelbergh M. Narrative responsibility and artificial intelligence: how AI challenges human responsibility and sense-making. AI Soc. (2021):1–4. doi: 10.1007/s00146-021-01375-x
59. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. (2015) 10:53. doi: 10.1186/s13012-015-0242-0