Clinical decision support systems (CDSS) are a significant component of EHR systems.1 CDSS may use artificial intelligence (AI) or machine learning (ML) algorithms to identify patterns that improve care delivery.2 However, using AI/ML to enhance care delivery and healthcare outcomes raises long-standing concerns about racial bias among certain minority populations.3
A CDSS may be standalone software or part of the EHR; it supports providers' decision-making by analyzing patient data to deliver the right information to the right clinician at the right time and in the right setting.4 Other components of EHR design that improve health outcomes include add-on software applications, such as care coordination tools, e-prescriptions, patient portals, and EHR security systems.5
There are two main types of CDSS:
- The diagnosis decision support system (DDSS), in which inputting patient data generates a list of possible diagnoses that clinicians combine with their expertise to determine the patient's diagnosis; and
- The case-based reasoning (CBR), which uses knowledge and data from previous cases to guide new or existing cases. Clinicians review previous cases and determine the best treatment plan on a case-by-case basis.
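The two designs above can be contrasted in a brief sketch: a DDSS matches entered patient findings against a rule base to propose candidate diagnoses, while CBR retrieves the most similar previous case. All rules, cases, findings, and treatment plans below are invented for illustration; they are not real clinical logic.

```python
# Illustrative sketch only: toy rules and cases, not real clinical content.

# --- DDSS: match patient findings against a simple rule base ---
RULES = {
    "iron-deficiency anemia": {"fatigue", "pallor", "low ferritin"},
    "hypothyroidism": {"fatigue", "weight gain", "cold intolerance"},
}

def ddss_candidates(findings):
    """Return possible diagnoses ranked by how many findings match."""
    scores = {dx: len(req & findings) for dx, req in RULES.items()}
    return sorted((dx for dx, s in scores.items() if s),
                  key=lambda dx: -scores[dx])

# --- CBR: retrieve the most similar past case and its treatment plan ---
PAST_CASES = [
    {"findings": {"fatigue", "pallor", "low ferritin"}, "plan": "oral iron"},
    {"findings": {"fatigue", "weight gain"}, "plan": "levothyroxine"},
]

def cbr_closest_case(findings):
    """Pick the past case sharing the most findings with this patient."""
    return max(PAST_CASES, key=lambda c: len(c["findings"] & findings))

patient = {"fatigue", "pallor", "low ferritin"}
print(ddss_candidates(patient))            # candidate list for clinician review
print(cbr_closest_case(patient)["plan"])   # plan from the closest past case
```

In both designs the output is advisory: the clinician reviews the candidate list or retrieved case and decides the next step, as described below.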
Irrespective of CDSS type, the clinician determines the next steps for treatment, which may include further testing or a final diagnosis.6 This viewpoint will address the challenges of the CDSS and improvement strategies from a person-in-environment (PIE) perspective comprising the micro (individual), mezzo (groups and community organizations), and macro (government and regulatory boards) levels.7
Challenges of the CDSS
Integrating the CDSS with the EHR improves effective and timely patient care.8 Studies show that CDSS reduces errors and promotes quality patient care.9 CDSS are great tools for lowering medication prescription errors and adverse drug effects.10
Some challenges of the CDSS include interoperability issues (e.g., lack of Fast Healthcare Interoperability Resources [FHIR] support), which may disrupt clinical workflow and negatively impact clinicians' work.11 Data relevance is another challenge, as a flood of irrelevant data may overwhelm clinicians and discourage use of the CDSS.12 Translating and recording the rapid growth of clinical research data to improve the CDSS for meaningful use is a challenge that AI/ML algorithms may address.13
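To make the FHIR reference concrete, the sketch below builds a minimal FHIR R4 Patient resource as plain JSON. The field structure (resourceType, name, birthDate) follows the published FHIR Patient schema, but the identifier and values are invented for illustration.

```python
import json

# Minimal FHIR R4 Patient resource; all values below are invented.
patient = {
    "resourceType": "Patient",
    "id": "example-123",  # hypothetical identifier
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-02",
}

# Systems exchanging data over FHIR send resources like this as JSON
# against a server's /Patient endpoint; here we simply round-trip it.
payload = json.dumps(patient)
parsed = json.loads(payload)
print(parsed["resourceType"], parsed["name"][0]["family"])
```

Because every FHIR-conformant system agrees on this structure, a CDSS can consume patient data from any EHR that supports the standard, which is why its absence disrupts workflow.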
Like computer viruses, poorly constructed algorithms may spread bias at a rapid pace, leading to inequities in the form of exclusionary experiences and discriminatory practices in healthcare.14
The medical and scientific professions explain and categorize disease prevalence and outcomes by race,15 such that race plays a significant role in understanding the human genome, establishing it as a factor in risk estimation and treatment selection.16 Some diseases have a higher prevalence in individuals of a particular ancestry and therefore inform the clinical algorithms and tools that form the CDSS, such as clinical calculators and screening metrics for such diseases.17
The CDSS assigns varying risk levels to certain conditions. For example, some cardiology algorithms use systolic blood pressure, blood urea nitrogen, sodium, age, heart rate, history of COPD, and race (African American or non-African American), identifying African Americans as a lower-risk population and thereby raising the threshold for recommending medical intervention.18 Such implementation of the DDSS may have unintended discriminatory or financial consequences when clinicians accept or override DDSS alerts and reminders in patient care diagnostics.19
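The mechanism described above can be illustrated with a toy point-based risk score. Every point value, variable weight, and threshold below is invented for illustration and does not come from any published clinical calculator; the sketch only shows how subtracting points for one racial group raises that group's bar for intervention.

```python
# Toy point-based risk score; all point values and the threshold are
# invented for illustration, not taken from any published calculator.

def toy_risk_points(systolic_bp, bun, sodium, age, heart_rate,
                    copd, african_american):
    points = 0
    points += 2 if systolic_bp < 110 else 0   # low blood pressure
    points += 2 if bun > 40 else 0            # elevated blood urea nitrogen
    points += 1 if sodium < 135 else 0        # low sodium
    points += age // 20                       # age contribution
    points += 1 if heart_rate > 100 else 0    # elevated heart rate
    points += 2 if copd else 0                # COPD history
    # The race term: subtracting points for African American patients
    # lowers their computed risk, raising the bar for intervention.
    points -= 3 if african_american else 0
    return points

INTERVENTION_THRESHOLD = 6  # invented cutoff

def recommend_intervention(**kwargs):
    return toy_risk_points(**kwargs) >= INTERVENTION_THRESHOLD

# Identical clinical values; only the race field differs.
vitals = dict(systolic_bp=100, bun=45, sodium=140, age=40,
              heart_rate=90, copd=False)
print(recommend_intervention(**vitals, african_american=False))  # True: flagged
print(recommend_intervention(**vitals, african_american=True))   # False: not flagged
```

Because identical clinical inputs yield different recommendations, the clinician inherits the burden of deciding whether to accept or override the output, which is the decision point where the consequences described above arise.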
Historical research biases and inequities, as observed in the Tuskegee Syphilis Study and Henrietta Lacks’ cells for scientific research, still negatively impact African American participation in research and may account for some reasons why the CDSS lacks suitable algorithms to provide accurate diagnostic recommendations in care delivery.20
Major studies discuss the clinician’s implicit bias or lack of cultural competence as the culprit on the micro level21; this viewpoint will introduce organizational culture and standard of care algorithms in CDSS design as contributors to the unintended discriminatory or financial consequences at the mezzo and macro levels, respectively.
The Micro, Mezzo, and Macro Levels of Influence
From the PIE perspective,22 micro-level bias thrives on reinforcement at the mezzo and/or macro levels. A CDSS design that raises the threshold for clinical intervention toward specific populations is nonetheless a recognized and accepted standard of medical practice across the United States.23
At the macro level, the national acceptance of CDSS algorithms built on readily available data that may underrepresent minority populations creates a reinforcement that burdens the clinician with choosing whether to accept a diagnosis or request further testing to validate the results from the AI/ML-powered DDSS or CBR.24
The standard of practice should be for the clinician to work with the CDSS to achieve the best possible intervention; however, clinicians should also employ good judgment and expertise. Organizational culture may influence clinicians' judgment and expertise at the mezzo level.25 Clinicians are less likely to order additional testing in organizations that emphasize cost savings, and more likely to order it when they feel psychologically safe to exercise sound judgment. Every decision the clinician makes affects the patient, positively or negatively.
A primary goal of the CDSS is to lower the cost of care. An organization may view overrides of CDSS recommendations as financial waste, or it may welcome the additional revenue they generate, incentivizing or disincentivizing clinicians' efforts accordingly. At the micro level, studies identify implicit bias and cultural competence as significant factors influencing the clinician's judgment.26
Conclusion
Reassessing how clinical data are used in the design of CDSS algorithms, including how racial minority populations are represented in clinical research study designs, should inform future improvement of the CDSS and EHR. Organizations seeking to improve the EHR should evaluate the accuracy of race-based algorithms in patient care while using the CDSS. An improved CDSS will significantly improve patient care delivery.
MGMA and CAHME
This article was one of two finalists in the winter 2022 call for papers around diversity, equity, and inclusion (DEI) and health equity, sponsored by MGMA and the Commission on Accreditation of Healthcare Management Education (CAHME). To learn more about MGMA student memberships for individuals enrolled in CAHME-accredited programs of study, visit mgma.com/cahme.
Notes:
1. Bowman S. “Impact of electronic health record systems on information integrity: Quality and safety implications.” Perspect Health Inf Manag. 2013;10:1c.
2. Tong M, Artiga S. “Use of Race in Clinical Diagnosis and Decision Making: Overview and Implications.” KFF. Dec. 9, 2021. Available from: http://bit.ly/3kzECEB.
3. Ibid.
4. Graber ML, Byrne C, Johnston D. “The impact of electronic health records on diagnosis.” Diagnosis. 2017;4(4):211-223. doi: 10.1515/dx-2017-0012.
5. Tubaishat A. “The effect of electronic health records on patient safety: A qualitative exploratory study.” Inform Health Soc Care. 2019;44(1):79-91. doi:10.1080/17538157.2017.1398753.
6. Graber, et al.
7. Tyler S. “The Person in Environment.” Human Behavior and the Social Environment. Published online May 26, 2020. Available from: bit.ly/3kCLhhi.
8. Bowman.
9. Tubaishat.
10. Ibid.
11. Ibid.
12. Bowman.
13. Ibid.
14. Vyas DA, Eisenstein LG, Jones DS. “Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms.” N Engl J Med. 2020;383(9):874-882. doi:10.1056/NEJMms2004740.
15. Ibid.
16. Tong, Artiga.
17. Ibid.
18. Vyas, et al.
19. Graber, et al.
20. Tong, Artiga.
21. Ibid.
22. Tyler.
23. Tong, Artiga.
24. Graber, et al.
25. Mannion R, Davies H. “Understanding organisational culture for healthcare quality improvement.” BMJ. 2018;363:k4907. doi:10.1136/bmj.k4907.
26. Graber, et al.