Tuesday, May 13, 2008

What’s so great about CPOE?

My recent criticism of electronic medical records (EMRs) has focused on documentation templates. Now that I’m on a roll with EMR posts I may as well cover the other side of EMRs: computerized physician order entry (CPOE). The push for universal adoption of CPOE is on. Leaders in the patient safety movement tell us it’s a good thing. So what’s so good about it?

For starters let’s look at the description in AHRQ’s glossary:

Physicians (or other providers) directly enter orders into a computer system that can have varying levels of sophistication. Basic CPOE ensures standardized, legible, complete orders, and thus primarily reduces errors due to poor handwriting and ambiguous abbreviations.

It goes on to point out that many systems have decision support built in: suggested doses, allergy alerts, and order sets that conform to evidence-based practices. More sophisticated systems may integrate patient data such as weight and creatinine clearance.
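
As a rough illustration of what that last kind of decision support might look like, here is a minimal sketch of a renal dose check written in Python. It is not drawn from any actual CPOE product; the function names, the example drug limits, and the use of a Cockcroft-Gault estimate are assumptions made purely for illustration.

# A hypothetical renal dose check of the sort a CPOE decision-support module might run.
# All names, thresholds, and drug limits here are assumptions for illustration only.

def creatinine_clearance(age_years, weight_kg, serum_cr_mg_dl, is_female):
    """Estimate creatinine clearance (mL/min) using the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if is_female else crcl

def check_renal_dose(ordered_dose_mg, crcl_ml_min, max_dose_by_crcl):
    """Return an alert message if the ordered dose exceeds the renal-adjusted
    maximum, or None if the order passes the check."""
    for crcl_cutoff, max_dose in sorted(max_dose_by_crcl.items()):
        if crcl_ml_min < crcl_cutoff and ordered_dose_mg > max_dose:
            return ("Alert: ordered dose {} mg exceeds suggested maximum of {} mg "
                    "for CrCl below {} mL/min".format(ordered_dose_mg, max_dose, crcl_cutoff))
    return None

# Illustrative limits for an imaginary renally cleared drug: max 250 mg if CrCl < 30,
# max 500 mg if CrCl < 60, no automatic limit above that.
dose_limits = {30: 250, 60: 500}

crcl = creatinine_clearance(age_years=78, weight_kg=60, serum_cr_mg_dl=1.8, is_female=True)
alert = check_renal_dose(ordered_dose_mg=500, crcl_ml_min=crcl, max_dose_by_crcl=dose_limits)
print(alert)  # With these numbers, CrCl is roughly 24 mL/min, so the 500 mg order triggers the alert.

Even a toy example like this hints at the design questions real systems have to answer: where the thresholds come from, how current the lab data are, and what the ordering clinician actually sees when the alert fires.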

No one would dispute that these are good ideas. The real rub is how CPOE performs in the real world. Do the benefits outweigh the unintended consequences? Bob Wachter, who has written a couple of recent posts on health care technology, seems to think they do. In last Friday’s post he referred readers to AHRQ’s patient safety network, where a search on CPOE yields 174 citations. But the top hit is an article on unintended consequences and contains a link to the fabled Pittsburgh study showing an increase in mortality following the implementation of CPOE. Then there’s this study showing that CPOE actually creates errors. Clearly there’s a trade-off between errors intercepted by CPOE (which may or may not have been intercepted “downstream” in traditional paper-based systems) and new and unanticipated types of errors introduced by CPOE.

What is the net result in terms of patient safety? To answer that question we need outcome-based data. Such data are sparse, but the Pittsburgh study is concerning. To be fair, the negative results of that study may reflect learning-curve issues more than inherent risks of CPOE itself. On the positive side, Wachter cites this study. But it’s from Brigham and Women's Hospital, raising questions about real-world generalizability. Moreover, the significance of the error reduction attributable to CPOE in that study is unclear from the paper. This very recent systematic review demonstrated CPOE’s ability to intercept many errors but failed to show improvement in patient outcomes.

What’s my bottom line as of May 13, 2008? CPOE is a great idea with real potential, but its boosters bear a burden of proof they have yet to satisfy: it has not been shown to improve patient outcomes. Why is there such a disconnect between theory and real-world results? The downside in terms of creating new errors is well documented.

But there’s a less tangible downside. For clinicians, CPOE is a distraction. What do I mean by that? It adds a new burden to our workflow: order processing. Doctors are trained to focus on clinical issues. We need to know what drug to give, when and how much, and what tests to order. We are not trained in how to search the computer for the appropriate orders, how to customize our therapy when the computer’s options are limited, how to be sure our order entry is properly routed, or how to devise the workarounds that are inevitably necessary in such systems. Those issues, formerly in the domain of clerical employees, are now foisted on doctors. They are time-consuming and they pull us away from our clinical focus. The challenge for CPOE development is to create systems that allow doctors to concentrate on the clinical problem at hand, free of questions such as “Where can I find the basal/bolus insulin protocol?” or “Am I sure I entered this right?”

Wachter’s other post puts CPOE in the helpful perspective of the “Technology hype cycle.” New technologies are initially met with unwarranted enthusiasm. Then comes a backlash when they fail to meet those initial expectations. Finally, and gradually, acceptance settles into a middle ground in which users appreciate the benefits but realize the technology isn’t nearly as good as originally hyped. It’s a useful model to keep in mind because it gives us a road map: a sense of where we’ve been and where we hope to end up.
