Friday, April 14, 2000

Measurement of Operations Performance: An Invaluable Tool for Patient Care

Thomas P. (“Tip”) O’Neill, the late Speaker of the U.S. House of Representatives, once commented that “all politics is local.” So too are all emergency departments.

Our emergency departments’ clinical operations pose endless challenges, and the result I obtain in Brooklyn most assuredly will require a unique implementation in your world or it won’t work at all. Nonetheless, we can learn from each other, particularly if we are willing to look beyond the specific sought-after result to the processes that bring us to that result. “Best practices” described anywhere usually have some value everywhere. Garnering that value requires reliable measurement.

Measurement of operations performance is an invaluable tool for helping improve patient care and, as importantly, the patients’, community’s, and medical staff’s perception of the care we deliver in the ED. Yet resistance to measurement of operations performance is common among emergency physicians, and even when measurement itself is grudgingly accepted, disputing the validity of the findings is equally common.

Emergency physicians resist measurement of operations performance or disdain the results because by and large they were never taught how to use the results to improve care for patients. Unfortunately, most emergency physicians don’t take the time or are not afforded the opportunity to participate in interpreting results, and therefore they rarely gain the opportunity to learn about the processes of measurement and the value of the measurement. Rather, emergency physicians mostly learn about operational performance measures through someone else’s interpretation of a particular finding.

Bludgeon or Tool?

Too often, hospital or practice managers begin by using these measurements of operations performance as a bludgeon rather than a teaching tool. For example, over the past several years, physician productivity has become an oft-measured parameter, sometimes with dismal findings for one or more emergency physicians in the group. The dismal result and the management pronouncement that “something’s wrong, fix it” fail to convey the importance of the finding and the real need to address it. They set up a situation in which energy is diverted to a struggle over the quality of the data and the value of the particular measure.

Just as vital signs are vital but don’t tell the whole story about a given patient or clinical condition, so too are measures of physician productivity important without telling the whole story of a physician’s value. Few of us make judgments in practice or in life on a single parameter. Even W. Edwards Deming, the statistician who popularized statistical process control and whose name has become nearly synonymous with continuous quality improvement (CQI), acknowledged when speaking of his system of “Profound Knowledge” that “not all facts, not all measures are known or knowable.”

Nonetheless, measurement has value in a variety of circumstances. Many practices may have one or more members who are perceived by some as unable to “move the room.” In practices with periods of multiple physician coverage, one physician may be seen as “slow,” causing colleagues to avoid double-coverage shifts with him. When confronted, the “slow” physician may ask for a more objective measure of his performance.

As an alternative example, a new member of the practice or perhaps a recent residency graduate may be thought of as a “slow” physician because of a subjective sense of the state of the department when this particular physician is working. In both of these instances, some measure, used over time to track change in performance, may be sought by the practice leadership, the physician, or hospital management. Many people singled out for examination seek objectivity and reliability in the evaluation. Yet, others remain uncomfortable with measurement, seeing productivity measurement as a two-edged sword that can be more harmful than helpful. I disagree; evaluation or improvement of operations performance requires measurement.

Credible Measurement Vital

Credibility in measurement is vital. Producing and exhibiting a number is a pointless exercise in an environment where neither comparable numbers nor the underlying data are available to the subjects of the measurement. While measurement and reporting shouldn’t wait for each individual’s review of every datum, practices and EDs lacking transparency are not environments in which productivity measures, regardless of how carefully produced and published, will have the respect of those physicians subjected to measurement. Without respect for the process of measurement, there can be no hope for constructive change among the physicians or others whose work is subjected to measurement.

Some physicians are quick to dispute the validity of clinical operations measurements. Trained in science and the scientific method, most physicians are notably slow to embrace the “management analysis” implicit in clinical operations management. Management analysis shouldn’t be confused with the scientific method. The validity physicians anticipate when deploying prospective hypothesis testing differs from the validity of management analysis examining an operations issue. Accordingly, precision of measurement (by which I mean especially the repeatability of the method and the reliability of the result) is more important than absolute accuracy. Consequently, single measurements are of scant value; yet, repeated measurements over a period of time often prove a powerful and reliable tool.

Thus, the second point: Start measuring now. Don’t wait until you change something or have improvement efforts underway. Nothing will speak so resoundingly of your success in the future as a measurement with a dismal finding at the start and a strikingly positive trend over time. Beginning the process of measurement speaks to a seriousness of purpose that may be used to persuade your critics in management or elsewhere that you are committed to evaluation and, as necessary, improvement.

As I mentioned, even vital signs don’t tell the whole story about a patient, and neither will any one measurement of operations performance. Thus my third point: No single measurement should by itself induce immediate action. Responsible leaders will not be stampeded into imprudent action based on a single observation. Whatever the interval of measurement, usually seven or more observations should be recorded before any action is contemplated. Thus, a measurement made monthly could mean waiting seven months before acting. If the pressures of the real world require action sooner, it is better to measure more frequently than monthly than to act precipitously on too few observations.
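The seven-observation guideline above can be sketched in code. This is a minimal illustration, not anything from the column itself: the function names, the threshold constant, and the sample data are all hypothetical. The idea is simply that a tracking tool should refuse to declare a trend until at least seven repeated measurements exist.

```python
# Sketch of the "seven or more observations before acting" rule.
# All names and sample figures below are illustrative assumptions.

MIN_OBSERVATIONS = 7  # common run-chart threshold before contemplating action

def ready_to_act(observations):
    """Return True only once enough repeated measurements exist."""
    return len(observations) >= MIN_OBSERVATIONS

def trend_direction(observations):
    """Crude trend check: compare the mean of the earliest and latest halves."""
    if not ready_to_act(observations):
        return None  # too few points; keep measuring, don't act yet
    half = len(observations) // 2
    first = sum(observations[:half]) / half
    last = sum(observations[-half:]) / half
    if last > first:
        return "rising"
    if last < first:
        return "falling"
    return "flat"

# Hypothetical monthly patients-seen-per-hour for one physician.
monthly = [1.6, 1.7, 1.5, 1.8, 1.9, 2.0, 2.1]
print(ready_to_act(monthly))     # True: seven observations recorded
print(trend_direction(monthly))  # "rising"
```

Measuring weekly instead of monthly fills the same seven-point window in under two months, which is the point of the paragraph above: shorten the interval rather than act on a single reading.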

Improvement of operations performance requires measurement, which is best started as soon as possible, but no single measurement itself requires an immediate response. We have no choice in the matter, for without measurement how will we know that the changes we plan to undertake are an improvement?