Electronic Health Record Usability Issues and Potential Contribution to Patient Harm

Accepted for Publication: January 26, 2018.

Corresponding Author: Raj Ratwani, PhD, National Center for Human Factors in Healthcare, MedStar Health, 3007 Tilden St NW, Ste 7L, Washington, DC 20008 (raj.m.ratwani@medstar.net).

Author Contributions: Dr Ratwani had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Howe, Adams, Ratwani.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: All authors.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Howe, Adams, Ratwani.

Obtained funding: Ratwani.

Administrative, technical, or material support: All authors.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.

Funding/Support: This project was funded by grant R01 HS023701-02 from the Agency for Healthcare Research and Quality (AHRQ) of the US Department of Health and Human Services (Dr Ratwani).

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The opinions expressed in this document are those of the authors and do not reflect the official position of AHRQ or the US Department of Health and Human Services.

Received December 5, 2017; accepted January 26, 2018. Copyright 2018 American Medical Association. All Rights Reserved.

This study analyzed patient safety reports in and near Pennsylvania from 2013 through 2016 to identify those that contained explicit language associating possible patient harm with an electronic health record usability issue.

Electronic health record (EHR) usability, the extent to which EHRs support clinicians in achieving their goals in a satisfying, effective, and efficient manner, is a point of frustration for clinicians and can have patient safety consequences. However, the specific usability issues and EHR clinical processes that contribute to possible patient harm across different health care facilities have not been identified. We analyzed reports of possible patient harm that explicitly mentioned a major EHR vendor or product.

Methods

This study was approved by the MedStar Health Institutional Review Board with a waiver of informed consent. Patient safety reports, which are free-text descriptions of safety events, were analyzed from 2013 through 2016. Reports were retrieved from the Pennsylvania Patient Safety Authority database, which collects reports from 571 health care facilities in Pennsylvania, and from a large multihospital academic health care system in the mid-Atlantic, outside of Pennsylvania. Reports were voluntarily entered by health care staff, mostly nurses, and included several sentences describing the safety event, contributing factors, and a categorization of the effect on the patient. This categorization indicates whether the event reached the patient (meaning additional health care services were required), whether there was harm at the time of reporting, or the potential for harm to the patient. The harm categories were (1) reached the patient and potentially required monitoring to preclude harm, (2) potentially caused temporary harm, (3) potentially caused permanent harm, and (4) could have necessitated intervention to sustain life or could have resulted in death.

Reports were included for analysis if 1 of the top 5 EHRs (vendors or products; based on the number of health care organizations attesting to meeting meaningful use requirements with that EHR) was mentioned and if the report was categorized as reaching the patient with possible harm. Two usability experts reviewed reports to determine whether each report contained explicit language associating the possible harm with an EHR usability issue. Usability-related reports were further categorized into 1 of 7 usability topics describing the primary usability challenge, based on a synthesis of previous taxonomies, and into 1 of 4 EHR clinical processes, based on existing categories of EHR interactions (Table 1). A subset of the data (15%) was dually coded. Interrater reliability κ scores were 0.90 (usability as a contributing factor), 0.83 (usability category), and 0.87 (EHR clinical process).
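The interrater reliability statistic reported here is Cohen's κ, which corrects raw percent agreement for agreement expected by chance. As a minimal sketch of how such scores are computed (the labels below are hypothetical illustrations, not study data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dual-coded labels (usability as a contributing factor: yes/no)
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(a, b), 2))  # 0.75
```

A κ of 0.90 for the binary usability judgment, as reported, indicates near-perfect agreement by conventional benchmarks.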

Table 1.

Definitions and Examples of Electronic Health Record (EHR) Usability and Clinical Process Issues Identified in Possible Patient Harm Reports, 2013-2016 a

Usability Issue b

Data entry
Definition: EHR data entry is difficult or not possible given the clinician's work process, preventing the clinician from appropriately entering desired information.
Example: Pharmacist searched for the q24hr entry in the EHR by typing "q24h," Enter, and Enter again, which pulls up "Q24HP" and "Q24HR"; because she hit Enter a second time, it is assumed that the selection is q24hr; the EHR populates all frequencies in alphabetical and numerical order.

Alerting
Definition: EHR alerts or other feedback are inadequate because they are absent, incorrect, or ambiguous.
Example: Allergy alert did not fire to prescriber even though gelatin allergy was listed in the EHR.

Interoperability
Definition: EHR interoperability is inadequate within components of the same EHR or from the EHR to other systems, hindering the communication of information.
Example: Patient was admitted as a trauma; the lab value did not flow into the EHR when the patient identification was confirmed.

Visual display
Definition: EHR display of information is confusing, cluttered, or inaccurate, resulting in clinician difficulty interpreting information.
Example: The orders in the EHR still showed the medication from the previous 2 administrations at the correct dose (unchanged), but dated for the previous day, which is subtle to notice in a long list of medications.

Availability of information
Definition: EHR availability of clinically relevant information is hindered because information is entered or stored in the wrong location or is otherwise inaccessible.
Example: I placed postoperation orders in the EHR; they were initiated and I signed them; the perianesthesia nurse called and said they had "failed"; on the orders menu, all orders had failed; I was unable to place new orders, and the nurse was unable to initiate old orders.

System automation and defaults
Definition: The EHR automates or defaults to information that is unexpected, unpredictable, or not transparent to the clinician.
Example: Yesterday, I was entering a patient's warfarin dose to start October 1 at 8:00 pm; when I entered the time, I did not realize the EHR had defaulted to October 2 at 8:00 pm before pushing the order through.

Workflow support
Definition: The EHR workflow is not supported due to a mismatch between the EHR and the mental model of the end user.
Example: A test ordered by the office through the EHR was "thyroid group"; the specimen was drawn and ordered by the laboratory; one part of the thyroid test was not performed because of a confusing translation between the physician order and the EHR.

Clinical Process c

Order placement
Definition: Placing or relating to a clinical order (eg, admission, laboratory, referral, medication, procedure).
Example: A physician put his orders in the EHR (patient was in the recovery room postsurgery) 15 min after the physician left the hospital; the orders should have been active; unable to pull the medications to administer to the recovery room patient, but the orders appeared to be completed; on the Medication Administration Record everything was shadowed grey; I spoke to the medical-surgical charge nurse to see if the floor discontinued the orders, and she stated this has happened several times on the night shift regarding physician orders being discontinued and/or disappearing.

Medication administration
Definition: Relating to adverse drug reactions; wrong dose, duration, concentration, timing, or route; etc.
Example: Patient was given an additional dose of diltiazem today; tasked to start at 6 am, given at 5:30 am by the night shift nurse; another task fired at 10:00 am because the medicine was written as daily; given at 12:30 pm when patient returned from her test.

Review of results
Definition: Receiving or viewing a clinical result on the intended patient (eg, laboratory, pathology, imaging).
Example: Gentamicin trough ordered for 5 am; it was sent to the lab and the level came back as 1.6; a dose of gentamicin was given; nursing missed the level being high because it showed up as "within normal limits" in EHR values; the EHR should recognize high levels for the neonatal population.

Documentation
Definition: Accurate recording and reviewing of health information, status, treatment, planning, etc, of a patient.
Example: Patient with 2 EHR encounters admitted; physician orders under 1 encounter, unit documentation on another encounter; cannot combine; cannot see orders on both encounters.

Abbreviation: q24hr, once every 24 hours.

a Data were from the Pennsylvania Patient Safety Authority Database and a large multihospital academic health care system in the mid-Atlantic, outside of Pennsylvania.

b Usability issue categories were based on a synthesis of existing taxonomies. Reports were categorized based on expert review to identify the primary usability issue that contributed to the possible harm event.

c Clinical process categories were based on existing categories of EHR interactions. Reports were categorized based on expert review to identify the primary EHR interaction that contributed to the possible harm event.

Results

Of 1.735 million reported safety events, 1956 (0.11%) explicitly mentioned an EHR vendor or product and were reported as possible patient harm, and 557 (0.03%) contained language explicitly suggesting that EHR usability contributed to possible patient harm. Among the 557 reports, the harm levels were: reached the patient and potentially required monitoring to preclude harm (84%, n = 468), potentially caused temporary harm (14%, n = 80), potentially caused permanent harm (1%, n = 7), and could have necessitated intervention to sustain life or could have resulted in death (<1%, n = 2).

Of the 7 usability categories, the challenges were data entry (27%, n = 152), alerting (22%, n = 122), interoperability (18%, n = 102), visual display (9%, n = 52), availability of information (9%, n = 50), system automation and defaults (8%, n = 43), and workflow support (7%, n = 36). Of the 4 EHR clinical processes, usability challenges occurred during order placement (38%, n = 213), medication administration (37%, n = 207), review of results (16%, n = 87), and documentation (9%, n = 50) (Table 2).

Table 2.

Frequency of Clinical Process and Usability Issues Identified in the Possible Patient Harm Reports, 2013-2016 a

Values are No. of possible patient harm events (%).

Clinical Process Issue | Availability of Information | Alerting | System Automation and Defaults | Data Entry | Visual Display | Interoperability | Workflow Support | Total
Documentation | 6 (1.1) | 2 (0.4) | 2 (0.4) | 33 (5.9) b | 2 (0.4) | 3 (0.5) | 2 (0.4) | 50 (9.0)
Medication administration | 13 (2.3) | 77 (13.8) b | 22 (3.9) | 44 (7.9) b | 22 (3.9) | 19 (3.4) | 10 (1.8) | 207 (37.2)
Order placement | 25 (4.5) | 35 (6.3) b | 19 (3.4) | 56 (10.1) b | 18 (3.2) | 38 (6.8) b | 22 (3.9) | 213 (38.2)
Review of results | 6 (1.1) | 8 (1.4) | 0 | 19 (3.4) | 10 (1.8) | 42 (7.5) b | 2 (0.4) | 87 (15.6)
Total | 50 (9.0) | 122 (21.9) | 43 (7.7) | 152 (27.3) | 52 (9.3) | 102 (18.3) | 36 (6.5) | 557 (100)

a Data were from the Pennsylvania Patient Safety Authority Database and a large multihospital academic health care system in the mid-Atlantic, outside of Pennsylvania.

b Value greater than 5%.

Discussion

EHR usability may have been a contributing factor to some possible patient harm events. Only a small percentage of potential harm events were associated with EHR usability, but the analysis was conservative because safety reports capture only a small fraction of the actual number of safety incidents, and only reports with explicit mentions of the top 5 vendors or products were included.

Patient safety reports contain limited information, making it difficult to identify causal factors, and may be subject to reporter bias, inaccuracies, and a tendency to attribute blame for an event to the EHR. Additional research is needed to determine causal relationships between EHR usability and patient harm and the frequency of their occurrence. Although federal policies promote EHR usability and safety, additional research may support policy refinement.

Notes

Section Editor: Jody W. Zylke, MD, Deputy Editor.
