Jerry T McKnight & Pat Norton. Handbook of Death and Dying. Editor: Clifton D Bryant. Volume 1. Thousand Oaks, CA: Sage Reference, 2003.
Humans inherently make mistakes. Sometimes these mistakes are foolish; most of the time, mistakes teach us lessons. In most instances, the world in general does not learn of individual humans’ mistakes. In the worst cases, however, mistakes lead to bodily harm or death, events that are frequently reported in the news media. Important examples include the 1984 chemical spill in Bhopal, India; the nuclear mishap at Three Mile Island, Pennsylvania, in 1979; and the recent case of the installation of defective tires on Ford Explorers. Human errors also occur in the medical profession, and these errors can be costly in terms of lives lost and other tragic consequences. In this chapter, our chief focus is on deaths stemming from medical errors; we address current trends in the numbers and types of errors that occur, how the public perceives medical errors, the effects of such errors on society, and why the prevention of these errors is not easily achieved. We begin, however, with a brief discussion of some industrial accidents that have resulted from human error.
Human error has led to some of the most notable industrial accidents of the past 20 years. One of the largest of these was the 1984 disaster at the Union Carbide plant in Bhopal, India, which resulted in at least 3,000 deaths, approximately 500,000 injuries, and the closing of the factory (Gottesfeld 1999). The immediate cause of the accident was water seeping into a chemical storage tank, which caused an uncontrollable chemical reaction; the longer-term cause, however, was a lapse in safety standards and maintenance procedures at the plant over a period of months prior to the accident. In addition, the plant itself had been constructed using low-quality materials, and plant staff were poorly trained and largely inexperienced. At the time, India had no laws in place requiring the kinds of chemical release prevention and emergency response measures that would have prevented this accident or greatly reduced its deleterious impact.
In 1979, the nuclear reactor at Three Mile Island experienced a partial meltdown. Although no one died as a direct result of this accident, more than 2,000 personal injury claims were filed based on the negative health effects of gamma radiation exposure (Public Broadcasting Service 1998). The meltdown resulted from plant workers’ misreading of the machinery’s instrument indications over a period of more than 2 hours. Once the problem was discovered, the plant’s crew and engineers took an additional 12 hours to reach a consensus on appropriate corrective action.
The separation of treads on the Firestone tires that Ford Motor Company installed on its Explorer model SUV led to a series of accidents beginning in the mid-1990s and increasing in frequency through 2001, when consumer complaints led to a recall of the tires and lawsuits against both Firestone and Ford. Although some data suggest that the Explorer’s design increased the likelihood of vehicle rollover in a crash, Firestone bore the brunt of the responsibility for the accidents for distributing poorly manufactured tires. An investigation by the National Highway Traffic Safety Administration led to allegations that Firestone had used substandard rubber in tires when supplies of higher-quality rubber ran low, employed lackadaisical inspection practices, had workers puncture air bubbles on the insides of tires, and used poorly trained workers when its union labor force went on strike. Both Ford and Firestone issued recalls and were charged in class-action suits seeking recovery of inspection and replacement costs and, in some instances, damages for deaths and/or injuries resulting from wrecks caused by the separation of the tire treads.
All of these results of human error are tragic, but they are easy to relegate to the category of “things that happen to others.” Most people do not work in or live near a chemical or nuclear plant. Most individuals do not own dangerous cars. It is almost a certainty, however, that at some point every individual will interact with the medical community, whether through hospitalization or a routine visit to the doctor’s office. Americans have historically held medical professionals in high regard, but these individuals are, after all, only human, and they make mistakes. The sensationalized nature of the popular and news media coverage of medical errors serves notice that we could easily be the next victims.
Medical errors are an integral part of medical practice, in both hospital and clinic settings. With the advent of freestanding surgical centers where many outpatient surgical procedures are performed, the arena of medical errors has grown. Medical errors extend also to nursing homes, home health care settings, and local pharmacies. Due to aggressive reporting of newsworthy medical errors, most Americans are well aware of the dangers associated with interfacing with the medical system. As pharmacology, medical procedures (both diagnostic and therapeutic), and other interventions have advanced, the risks of medical errors have increased. The problem of medical errors is both cause for concern and part and parcel of the human experience within medical practice.
For some time, many Americans have been aware that medical errors occur, but the full extent of the problem was first delineated for the public at large by a report produced by the Institute of Medicine’s Committee on the Quality of Health Care in America in 1999. The IOM committee’s findings are compiled in a volume titled To Err Is Human (Kohn, Corrigan, and Donaldson 2000), which reports that as many as 98,000 people die each year in U.S. hospitals due to medical errors. Moreover, because this figure does not include outpatient deaths, the actual number of medical error-related deaths may be much higher. Prior to publication of the IOM report, many books and articles had been produced that tended to increase patient anxiety. Some of these include The Incompetent Doctor: Behind Closed Doors (Rosenthal 1995), which discusses the types of mistakes doctors make and the medical profession’s seeming inability to self-regulate; The Unkindest Cut: Life in the Backrooms of Medicine (Millman 1977), which details 2 years of sociological observations at a private, university-affiliated hospital in the United States, focusing on conversations and actions in the operating rooms, in the emergency department, at morbidity and mortality conferences, and at various hospital meetings; and The Medical Racket: How Doctors, HMOs, and Hospitals Are Failing the American Patient (Gross 1998), which is highly critical of all the entities named in its title. These books and others are promoted to a public that is often unable to distinguish between true warning calls and sensationalist reporting.
Books of this ilk often promote increased anxiety in medical patients and their families, although perhaps unintentionally. In addition, they help to feed suspicion in the minds of many, effectively creating an “on guard” type of attitude in both patients and physicians. Such suspicion and distrust on the part of either or both parties can create an atmosphere of “negative energy” in physician-patient encounters. Often, these negative emotions impair doctors’ ability to follow their instincts and the patients’ ability to heal. Such stress has the potential to create adversarial relationships between patients and their health care providers, and this unfortunately increases the risks to patients. For example, when a doctor senses that a patient is suspicious or aggressive, he or she is likely to order more tests and ask for more consultations for that patient. Although such increased thoroughness may seem like a good idea, each additional test or consultation carries its own attendant risks for the patient.
Health maintenance organizations, or HMOs, constitute a special category that deserves mention in any discussion of medical mistakes leading to death. We have all heard and read news stories about cases in which HMOs have withheld care from particular patients. Actually, patients are free to obtain any medical treatments they want, but because most cannot afford to pay for any treatments themselves, seeking care from providers outside their insurance coverage is not always a viable option.
The concept of HMOs has existed for many years, but HMOs were not formally established and defined as legal entities until passage of the federal HMO Act in 1973. This legislation, which was amended in 1976, 1978, and 1981, served to establish HMOs’ legitimacy as a means of providing low-cost medical care to the U.S. population. The federal government has also provided funds for the development of HMOs. Over time, HMOs have shifted from being nonprofit organizations to being for-profit businesses; as a result, many HMO owners and administrators have made huge profits.
HMOs established the “gatekeeper” concept in medicine. In such a system, the gatekeeper—a primary care physician, usually a family physician, internist, or pediatrician—sees each patient initially and then makes appropriate referrals to other specialists or subspecialists. The main problem with this system is that it can make it appear that the primary care physician is keeping patients from seeing specialists; this is ultimately an uncomfortable role for the primary care physician, because it tends to place physician and patient in adversarial roles. The more correct, appropriate, and comfortable role for the primary care physician is that of “gateway.” In such a role, the primary care physician willingly refers patients to appropriate consultants when needed, serving as a conduit or facilitator of patients’ health care needs.
The HMOs’ use of primary care physicians as gatekeepers grew more dangerous with the implementation of a payment system called capitation. Under this system, primary care physicians are paid a flat fee per patient per month (insurance companies refer to this as “per covered life per month”). A primary care physician is given a specific amount of money per month based on a patient’s age and medical condition (e.g., a physician who receives $15 per month for a given patient is paid $180 for all the care provided to that patient over a given fiscal year). This system allows HMO physicians to make money without actually seeing every patient. Doctors with “panels” of healthy patients make money without seeing their patients often, whereas doctors with panels of sick patients can exhaust the fee-for-service equivalent of their capitation payments in just a few office visits. The result of the use of the capitation system is that many sicker patients in HMOs may not receive adequate numbers of office visits. The HMOs thus created a system in which withholding care from patients is in the physician’s financial interest.
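The incentive at work here is a matter of simple arithmetic, sketched below. The $15-per-month capitation rate comes from the example above; the per-visit fee-for-service charge is an assumed figure for illustration only.

```python
# Illustrative sketch of capitation economics. The $15/month rate is the
# chapter's example; the $60 fee-for-service visit charge is an assumed
# value for comparison, not a figure from the chapter.

CAP_RATE_PER_MONTH = 15.00   # flat payment per covered life per month
FFS_VISIT_FEE = 60.00        # hypothetical fee-for-service charge per visit

def capitation_revenue(months=12):
    """Annual capitation revenue for one patient, regardless of visit count."""
    return CAP_RATE_PER_MONTH * months

def breakeven_visits():
    """Visits at which the year's capitation equals fee-for-service billing."""
    return capitation_revenue() / FFS_VISIT_FEE

print(capitation_revenue())   # 180.0 -> matches the $180/year in the text
print(breakeven_visits())     # 3.0  -> a sick patient "uses up" the fee quickly
```

A healthy patient seen once a year is highly profitable under these numbers; a chronically ill patient seen monthly is a financial loss, which is precisely the perverse incentive the text describes.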
Unfortunately, more than a few patients have been harmed by their HMOs. In a book titled Do HMOs Cut Costs… and Lives? (1997), Dr. Emerita Gueson presents an angry attack on HMOs, citing many examples of their failures to help patients. The book’s title is clearly a rhetorical question, as in the book’s dedication Gueson lists a number of patients and their families whom she describes as victims of a failed health care system.
Recently, a consumer-led revolution against perceived problems in the health care insurance industry has resulted in proposed legislation that would make HMOs more legally liable for their errors. In the past, HMOs were virtually immune from prosecution, but negative public opinion (as evidenced by the many satirical cartoons and bitter jokes about HMOs) has led to some changes. Although the HMO concept has a certain appeal, it seems that as private companies, HMOs are concerned primarily with making money. As a physician, one of us often tells his patients that health insurance companies in general “are not in it for your health.” One health care plan manager that one of us has spoken with indicated that it is more cost-effective for his company if a patient goes into the hospital and dies suddenly of a heart attack rather than receives a lot of expensive tests. As a businessman, he is correct, but such words are chilling. Doctors and patients must always be aware of the malevolent forces that work together to impede the healing process.
Legal Issues and the Culture of Blame
While we were writing this chapter, the news was reported that Dick Schaap, a longtime sports journalist, had died from complications after hip surgery. This led to questions as to the exact cause of death, what series of events led to this outcome, and what might have been done to prevent Schaap’s demise. Unfortunately, the simplicity of these questions belies the complexity of the underlying series of events, many of which may have been unavoidable.
In the early 1990s, the authors who reported on the Harvard Medical Practice Study called for improved patient safety (Brennan et al. 1991). Since that time, however, little has been done systematically nationwide in this regard. It is apparent that the medical industry is at least 10 years behind other industries in safety innovation. In most industries the reporting of safety problems is encouraged, but in the medical industry a cloak of silence and the potential for litigation arising from acknowledgment of mistakes hinder such reporting. Thomas Krizek (2000) asserts that improvement in health care quality is inhibited by five factors: (a) inadequate data on the incidence of adverse events, (b) inadequate practice guidelines or protocols and poor outcome analysis, (c) a culture of blame, (d) a need to compensate “injured” patients, and (e) difficulty in telling the truth. We discuss each of these factors throughout this chapter.
Krizek suggests that the principles that W. Edwards Deming introduced in Japan in his work on quality controls for the automotive and electronics industries might be applicable to medicine. Deming promoted his belief that it is the worker performing the task who, when appropriately empowered, is best able to identify and correct errors. Deming demonstrated that workers could produce nearly defect-free products with the aid of well-defined protocols, early error identification, and continuous data collection. These principles have yet to be instituted on a mass scale in medicine.
Failure analysis needs to be a part of any quality control mechanism aimed at creating a safer, more efficient environment, particularly in certain industries, such as aviation, and in the military. In aviation, the Federal Aviation Administration and the National Transportation Safety Board (NTSB) oversee safety and failure analysis. When a nonmilitary plane crashes, the NTSB sends an investigation team to the crash site in an effort to find the cause. This is no different from the TQI (total quality improvement) or CQI (continuous quality improvement) analyses that many industries use in managing production. The NTSB is not concerned about potential litigation, but solely about the cause of the crash. This is not to say that the pilots’ union does not protest vigorously any findings of pilot error; however, the independence of the NTSB allows it to investigate and report findings in an impartial and objective manner.
Medicine has the equivalent of an investigation process after a death—the autopsy, or postmortem examination. Autopsy (literally, “seeing with one’s own eyes”) is a valuable tool for determining the cause of a patient’s death, but this once-common procedure is now a rarity at many hospitals. It has been shown that the rate of discordance between clinical diagnoses and autopsy results is large, despite the technological advances that have been made in medicine. Even with the advent of magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), CT scans, computer-enhanced angiography, sophisticated diagnostic ultrasonography, nuclear scans, and sophisticated diagnostic endoscopy, as well as increased precision in laboratory tests and culture techniques, medicine is still an imprecise science, and diagnosticians still miss many diseases.
Burton, Troxclair, and Newman (1998) reviewed the postmortem examinations of 1,105 patients conducted between 1986 and 1995 at the Medical Center of Louisiana in New Orleans. The research team found 250 malignant tumors in 225 patients; 111 of these cancers (44%) had been misdiagnosed or had gone undiagnosed before death, and only 34% of the cancers were clinically suspected prior to death. Undiagnosed cancer was the cause of death in 57% of those patients who eventually died of cancer. The hospital in Burton et al.’s study had a high autopsy rate (42%) for a modern hospital owing to significant cooperation from the local medical examiner. In most areas of the United States, autopsy rates are less than 10% at teaching hospitals and less than 5% at community hospitals (Marwick 1995). These rates differ significantly from those of the 1960s, when autopsies were performed in more than 50% of deaths. The medical profession has been lulled into a false sense of security by the availability of high-tech diagnostic tools and has virtually lost the opportunity to know with certainty the cause of death in most cases. As George Lundberg, former editor of the Journal of the American Medical Association, has stated, “Low tech autopsy trumps high tech medicine in getting the right answer” (quoted in Tanne 1998).
There are probably multiple reasons for the decline in autopsy rates. The most significant of these is that most doctors neither request permission to perform autopsies nor insist on performing them. To take the most positive view of this, doctors assume that they know the causes of their patients’ deaths and doubt that autopsies will add any new or useful information. A more negative view is that, in this highly litigious climate, doctors really do not want to know (or have anyone else find out) if they misdiagnosed or mistreated their patients. Another plausible reason for the decline in autopsy rates is that deceased patients’ families do not understand the importance of the procedure, either because of general lack of education or because doctors fail to inform them. The autopsy is an important, and underutilized, quality assurance and educational tool that allows physicians to monitor medical diagnostics and therapeutics (Burton et al. 1998).
The Scope of the Problem
Although there are no clear epidemiological data available on the numbers of deaths caused by medical error, a disturbing trend appears to be emerging. If the numbers reported in the literature are close to correct, the problem of death due to medical error is staggering. And these numbers do not take into account those medical errors that lead to prolonged hospital stays, preventable hospital stays and clinic visits, and disability. Although many Americans have been aware for years of certain dangers associated with encountering the medical system, it is only recently that publicity about medical errors has pushed the magnitude of this problem into the public’s collective consciousness.
The single defining newsworthy event in this regard was the release in the fall of 1999 of the Institute of Medicine report mentioned previously (see Kohn et al. 2000). The Institute of Medicine was established in 1970 by the National Academy of Sciences to be an adviser to the federal government on issues of medical care, research, and education. The IOM report indicated that preventable adverse events are a leading cause of death in the United States. The IOM estimated that, extrapolating from data gathered in two studies for the 33.6 million hospital admissions in 1997, between 44,000 and 98,000 Americans die each year as a result of hospital errors. This places deaths in hospitals due to preventable errors between the fifth and eighth leading causes of death in the United States. This death toll is greater than that due to motor vehicle accidents (43,485), breast cancer (42,297), or AIDS (16,516). The national costs of these preventable deaths, which include lost income, lost household production, disability, and health costs, are estimated to be between $17 billion and $29 billion. Only limited data are available on medical errors in outpatient settings (clinics, outpatient surgical and diagnostic centers, and nursing homes) and pharmacy errors; if complete data were available, the number of preventable deaths due to errors could potentially approach the third or fourth leading cause of death. To put these statistics in another perspective: More than 6,000 lives are lost in workplace injuries each year (U.S. Bureau of Labor Statistics 1999), and the IOM estimates that in 1993 medication errors caused approximately 7,000 deaths (Kohn et al. 2000).
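As a rough check on the scale these figures imply, the IOM range can be converted into a per-admission risk. The arithmetic below is our own illustration of the extrapolation, not a calculation taken from the report.

```python
# Back-of-the-envelope view of the IOM figures: deaths per hospital
# admission implied by the 44,000-98,000 range against the 33.6 million
# admissions cited for 1997. The conversion is ours, for illustration.

ADMISSIONS_1997 = 33_600_000
LOW_DEATHS, HIGH_DEATHS = 44_000, 98_000

low_rate = LOW_DEATHS / ADMISSIONS_1997    # roughly 1 death per 760 admissions
high_rate = HIGH_DEATHS / ADMISSIONS_1997  # roughly 1 death per 340 admissions

print(f"{low_rate:.3%} to {high_rate:.3%} of admissions")
print(f"1 in {1/high_rate:.0f} to 1 in {1/low_rate:.0f}")
```

Even at the low end of the range, roughly one admission in every 760 ends in a preventable death, which is what makes the IOM's ranking of this cause of death plausible.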
One of the leading studies to provide some insight into the magnitude of the problem of medical errors was the Harvard Medical Practice Study, in which researchers reviewed more than 30,000 randomly selected discharges from 51 hospitals in New York State in 1984 (Brennan et al. 1991). According to Brennan et al. (1991), the proportion of adverse events that occurred in these hospitals was 3.7%, and 58% of these errors were deemed to be preventable. Death resulted in 13.6% of these preventable adverse events. These findings have been corroborated by Gawande et al. (1999), who reviewed 15,000 randomly selected hospital admissions in Colorado and Utah for adverse surgical events in 1992. They found that the incidence of these events was 2.9%, of which 54% were deemed preventable. In this study, 5.6% of the surgical adverse events led to patient death.
Steel and his colleagues (1981) conducted a prospective study of 815 consecutive patients to determine the degree of iatrogenic illness in a general medical service at a university hospital. They found that 36% of the patients developed iatrogenic illnesses, and that 9% of all patients in the sample developed iatrogenic illnesses that threatened their lives or produced serious disability. In 2% of the cases, the hospital- or health care provider-induced illness either caused death or contributed to the patient’s death.
In 1997, Andrews et al. published a report on their study of 1,047 patients admitted to two intensive care units and one surgical unit at a large teaching hospital. The researchers found that 45.8% of the patients in their sample had adverse events and that 17.7% of those events produced either serious disability or death. Andrews et al. conclude that each day a patient was hospitalized increased his or her risk of suffering an adverse event by 6%. Dubois and Brook (1988) studied 182 deaths in 12 hospitals and found that as many as 27% of the deaths might have been prevented. McGuire et al. (1992) studied 44,603 patients who underwent surgery at a large medical center and found that of the 749 who died, 7.5% of the deaths were judged to have been preventable.
The data cited above indicate that the risks of dying in a hospital setting due to medical error are not insignificant. Most of the studies we have mentioned focused on death as the end-point statistic, but some researchers have reported on disability and increased hospital stays as a result of preventable adverse events. Krizek (2000) studied a sample of 1,047 patients admitted to three surgical units and found that 480 patients (45.8%) experienced adverse events. In total, 2,183 errors occurred in these patients, of which 462 (21.2%) were deemed potentially threatening to life or limb. Of the 480 patients, 175 (17.7%) experienced at least one serious error. Krizek reports that the average length of stay for patients who did not have adverse events was 8.8 days, compared with 23.6 days for patients who experienced adverse events and 32 days for those who experienced serious adverse events.
Types of Medical Errors
The primary author became acutely aware of hospital mistakes when he first went into practice after finishing residency training. He found the hospital setting to be unpleasant, and he struggled with the hospital environment when seeing patients there. Unfortunately, some of the hospital personnel interpreted his efforts to deal with problems as his trying to blame them, and this soon created a bunker mentality of “him versus them.” One of his wiser, more experienced colleagues took him aside one day and explained that “the hospital is part of the disease and must be managed accordingly, just as the patient’s disease must be managed.” This was quite a revelation. Since that time, he has taken his colleague’s advice and worked to manage the hospital as if it were part of the patient’s disease complex, because, very practically, it is.
Due to the complexity of both medical practice and the illnesses that patients can have, medical professionals can make myriad errors. Additionally, the numbers as well as the types of errors that can occur are multiplied by the fact that patients, particularly hospitalized individuals, are the recipients of numerous procedures and treatments. For example, it has been estimated that on any given day in a hospital intensive care unit, more than 170 different activities occur to and around an individual patient. The potential for error is further compounded by the ready availability of advanced technological modalities, which translates into greater use of diagnostic and therapeutic interventions.
In one case, errant technology led to a patient’s death when the computer program controlling a radiation machine that was being used for therapy malfunctioned as the machine was being electronically positioned over the patient, and the machine delivered an overdose of radiation to the patient’s head. The patient experienced an epileptic-like seizure and died 3 weeks later (Saltus 1986). In another case that we know of, a patient who was pacemaker dependent was given an MRI. The MRI reprogrammed the pacemaker, causing it to malfunction and resulting in the patient’s death. Some may find it hard to believe that such errors actually happen in medicine, but because medicine is primarily people serving people, it is by nature a highly fallible, mistake-prone industry.
Deadly errors rarely occur in isolation. It is generally held that if a patient experiences an error early in the course of a hospitalization, he or she will usually experience a series of errors, none of which is self-correcting. This has been termed the “cascade effect” (Mold and Stein 1986). It is usually not the case that medical errors are made by lone individuals working in isolation, and studies have shown that those who make errors are usually not incompetent health care workers. In Krizek’s (2000) study, he found that only 37.8% of the time was a single individual responsible for an error, and when an individual was responsible, he or she was part of a system that ultimately either augmented the error process or failed to impede the error. Most of the time, medical errors are simply mistakes.
In a book titled How to Get Out of the Hospital Alive (1997), Dr. Sheldon Blau and Elaine Fantle Shimberg detail Blau’s personal hospitalization for heart disease and subsequent coronary artery bypass surgery at the hospital in which he practiced. It was a hospitalization, not unlike many, that was filled with errors and near misses. Blau and Shimberg explain how hospitals function and give many practical suggestions for having a successful hospitalization. They highlight the weaknesses of hospitals in an attempt to give patients more control over their environment within the hospital setting.
At one time, doctors treated cardiac dysrhythmias more aggressively than they do today, giving patients a host of medications whether they had heart disease or not. One of these medications in particular was so effective at suppressing premature ventricular contractions that it was called the “PVC killer.” Doctors would prescribe this medication and see their patients’ PVCs disappear practically before their eyes. Although there was a vague awareness that such medications might have side effects, most doctors believed the benefits outweighed the risks. Then came the Cardiac Arrhythmia Suppression Trial, which found increased mortality to be associated with the use of all of these medications (Cardiac Arrhythmia Suppression Trial Investigators 1989). As it turned out, these PVC killers were in fact patient killers, and physicians stopped prescribing them virtually overnight.
Americans use a great many prescription drugs; according to the National Wholesale Druggists’ Association (1998), approximately 2.5 billion prescriptions were dispensed in U.S. pharmacies in 1998. Accompanying this extensive use of medications is an increasing trend in medication errors, which account for a significant portion of all preventable medical errors resulting in death. Phillips, Christenfeld, and Glynn (1998) determined that the incidence of death from medication errors in the United States increased 257% from 1983 to 1993. They conclude that in 1983, medication errors caused 1 in every 539 outpatient deaths and 1 in every 1,622 inpatient deaths; by 1993, these figures had risen to 1 in every 131 outpatient deaths and 1 in every 854 inpatient deaths. Outpatient prescriptions increased 139% during this period, while outpatient deaths due to medication errors increased 257%.
Unfortunately, one problem with retail pharmacies is that they generally have become so busy that pharmacists have less time to interact with patients than was the case in the past. The paradox is that both physicians and pharmacists today are better trained than ever, but because they are also busier than ever, sometimes patients get shortchanged. The problem of lack of pharmacist-patient communication is compounded by the growth of mail-order and Internet pharmacy services, whether patients choose to use them to save money or because they are required to do so by their insurance companies. It is easy to imagine how such long-distance filling of prescriptions could occasionally produce errors with catastrophic results.
Although Phillips et al. (1998) do not draw any conclusions from their study, an understanding of the time frame during which it was conducted might prove enlightening. During the period of 1983 through 1993, there was a virtual explosion in the numbers of medications and medication classes available. Because of this proliferation in numbers and types of medications, as well as Americans’ increased use of outpatient medications (related to the aging of the population as well as the development of more aggressive treatments for arthritis, hypertension, heart disease, and diabetes), the potential number of interactions and adverse side effects increased to the point of becoming unquantifiable.
As the anecdote about PVC killers related above shows, medical professionals need to be aware that jumping on the bandwagon of new medications might not be in their patients’ best interests. The fact that FDA approval does not necessarily correlate with a medication’s safety has been driven home quite clearly. The old saying in medicine that “one needs to use new medications before they develop side effects” has certainly been illustrated by several FDA-approved drugs. In March 2000, history of a dubious sort was made when Rezulin (troglitazone) and Propulsid (cisapride) were recalled within the same week by their respective companies, under FDA agreement, because of deaths associated with their use.
Lesar, Briceland, and Stein (1997) analyzed 289,411 medication orders written in a tertiary care teaching hospital and determined the rate of significant error to be 1.81%. In such a setting, as few as 30 and as many as 60 different steps take place by the time a patient is given a medication, and even an error rate as small as 1-2% can potentially lead to significant problems. Children are at increased risk from medication errors because of their small size and immature organ development. A review of 101,002 medication orders at two children’s hospitals revealed 27 potentially lethal prescribing errors (Folli et al. 1987). Raju et al. (1989) studied medication errors in neonatal and pediatric intensive care units in a 4-year prospective study and found the frequency of iatrogenic injury to be 1 in every 33 admissions. Although most adverse drug events are nonlethal, some that can be classified as near misses involve inaccurate doses, inappropriate medications, or medication interactions that are potentially lethal.
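The danger of a "small" per-step error rate across 30 to 60 steps can be made concrete with a simple probability sketch. The step range comes from the text above; the 1% per-step rate and the assumption that steps fail independently are hypothetical simplifications.

```python
# Illustrative compounding of small per-step error rates. Assumed model:
# independent steps, each with the same probability of error. The 30-60
# step range is from the chapter; the 1% per-step rate is hypothetical.

def p_at_least_one_error(per_step_rate, steps):
    """Probability that at least one of `steps` independent steps fails."""
    return 1 - (1 - per_step_rate) ** steps

for steps in (30, 60):
    p = p_at_least_one_error(0.01, steps)
    print(f"{steps} steps at 1% per step: {p:.1%}")  # ~26% and ~45%
```

Under these assumptions, a medication process with 60 steps and a mere 1% chance of error at each step produces at least one error nearly half the time, which is why the final safety of a medication order depends on downstream checks catching upstream mistakes.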
It is likely that the majority of adverse drug events, even those causing death, take place without the knowledge or recognition of the health care team. The most common adverse drug events are associated with cardiovascular agents, anticonvulsants, antihypertensives, and nonsteroidal anti-inflammatory medications.
Many medication errors occur due to miscommunication. Unfortunately, in the time-compressed world in which health care professionals work, it is relatively easy for them to write medication orders that are not completely legible. It is also easy to misplace decimal points—for example, it is not difficult to write 20 mg when one means to write 2.0 mg. In his book titled Drug Death: A Danger of Hospitalization (1989), Hoffmann gives an account of a death that resulted when a patient’s dose of the anti-gout medication colchicine was interpreted as 10 mg instead of 1.0 mg. When giving verbal orders, it is easy to misspeak and say “5.0 mg of epinephrine” instead of “0.5 mg”—Hoffmann tells of such a mistake that caused the death of a patient in a nursing home. It is also easy for nurses and other health care workers to misunderstand verbal orders; Hoffmann relates a case of a near miss that occurred to a hospitalized patient when she was given 60 mg of theophylline per hour instead of 16 mg per hour. Fortunately, most errors of this kind are detected by vigilant and experienced individuals and so are prevented from causing catastrophic events.
Another common problem stems from the fact that many medications that are otherwise quite different have similar-sounding names (Davis, Cohen, and Teplitsky 1992). When Prilosec, an ulcer and acid reflux medication, was first introduced, its name was Losec. However, handwritten prescriptions for Losec were frequently interpreted as being for Lasix, a diuretic, and the FDA mandated a name change (Cohen 2000). Examples of other sound-alike drugs are Coumadin (an anticoagulant) and Kemadrin (an anti-Parkinson’s medication), Taxol (an anticancer drug) and Paxil (an antidepressant), and Zebeta (an antihypertensive) and Diabeta (an antidiabetic medication), not to mention Celexa (an antidepressant), Celebrex (an antiarthritic), and Cerebyx (an anticonvulsant) (Cohen 2000).
With the increasing availability of new medications, physicians are not always able to keep abreast of the knowledge required to use them appropriately. Even physicians’ mandatory continuing education is unlikely to keep them adequately informed about all the possible interactions and serious side effects of the many new drugs being added to the market each year. It is virtually impossible for any physician to predict all of the multiple potentially life-threatening effects of drug interactions involving new medications. Fortunately, not every patient exposed to a given potentially lethal interaction actually suffers harm. Unfortunately, serious adverse drug events are most likely to happen to the sickest patients, those who can least tolerate them. This is a variation of Murphy’s Law that is well recognized in medicine.
Technical and Diagnostic Errors
One kind of diagnostic error that can occur is illustrated by a case in which one of us was involved: A hospital emergency department physician called the attending physician and said that he had a patient with abdominal pain whose blood pressure was 90/60. The emergency department doctor wanted to admit the patient and let the other doctor see the patient in the morning. The attending physician, however, thought there was something a little odd about the situation and decided to examine the patient himself. When he arrived in the emergency department, he examined the patient’s abdomen and found that the patient had an expanding abdominal aortic aneurysm that was about to burst. The patient was immediately sent to a referral hospital for aortic aneurysm repair. This case turned out well, but it likely would not have if the attending physician had relied solely on the judgment of the emergency department physician.
Another form of diagnostic error is the failure to diagnose a curable but potentially lethal condition in time for it to be treated. An example would be the failure to biopsy a cervical lymph node and thus miss a diagnosis of Hodgkin’s disease or other curable cancer. Misreading an abnormal mammogram or failing to biopsy a breast nodule and so missing a diagnosis of cancer are both potentially lethal errors. Even though radiologists are well trained, it is not uncommon for them to miss important X-ray abnormalities. Also, research has shown that different radiologists interpret the same X rays differently in a small portion of cases (Herman et al. 1975).
Herman et al. (1975) found that radiologists can miss important findings by misreading radiographs. In one case that we know of, a 45-year-old woman who had been coughing for 2 months had a chest X ray, and a radiologist read it as normal. In reality, however, the woman had a lung mass that turned out to be cancer, from which she died 8 months later.
Sometimes surgeons commit fatal errors by failing to remove cancerous tissue completely during breast lumpectomies or other cancer surgeries, and sometimes fatal errors stem from inaccurate biopsies of malignant tissues. Pathologists occasionally make the error of failing to see cancer cells on microscopic specimens. Unfortunately, in some cases cancer cells look very much like cells from tumors that are benign. If a cancerous tumor is misdiagnosed as benign, the cancer will not be effectively treated, and if a benign tumor is misdiagnosed as cancerous, the patient will be subjected to overly aggressive treatment for a benign condition. The latter happened in the case of one woman who underwent bilateral mastectomy when she did not have breast cancer (“Mastectomy Patient” 2003). It is obviously tragic enough to lose a breast due to cancer; it is a disaster to lose healthy breasts due to diagnostic error.
Pathology and cytology errors account for a small but significant portion of medical errors that have caused death. For example, it is not uncommon for abnormal Pap smears to be read as normal. Because there is a gradual transition from healthy cells to cancerous cells on cervical cytology, and because there may be thousands of cells on any one Pap smear slide, it is quite possible for a cytologist to overlook abnormal cells. Historically, another reason for misinterpretation of Pap smears was that labs were requiring cytologists to read too many slides. Before government regulation limited the number of slides any given cytologist can be required to read to 80 per day, many cytologists were reading hundreds of slides per day. Such practices are dangerous because operator fatigue may result in misinterpretation of Pap smears.
Although pathologists do not misread many microscopic specimens, erroneous readings of even a small number of slides within the course of a year can lead to devastating results. In one case with which we are familiar, a patient’s skin biopsy that was initially read by a pathologist as normal tissue was later read by another pathologist as cancer. This error prevented timely treatment of a lethal skin cancer. The second pathologist later admitted sheepishly that the previous pathologist had been “let go” because of a number of “misreads.” Such incidents are indeed frightening to both patients and physicians.
Kronz, Westra, and Epstein (1999) reviewed 6,171 cancer cases referred to Johns Hopkins Hospital over a 21-month period and found that 86 (1.4%) of these cases were misdiagnosed. In 80 of the cases, the new diagnosis altered the treatment plan, and in 81 cases it altered the prognosis. The diagnosis was changed from malignant to benign in 20 of the cases, and in 5 cases the diagnosis was changed from benign to malignant.
It is important to remember that a second opinion may significantly alter the treatment and disease course of a specific cancer. It is known that pathologic diagnosis is not an exact science. Studies have shown that there can be significant disagreement among pathologists in the diagnosis of a particular sample of cells on a slide. Many hospitals automatically seek second opinions within the institution on significant pathology, and if there is disagreement, an outside expert is consulted for a final determination. Most histologic specimens are read by general pathologists. These pathologists may have expertise in specific tumors, but may not be expert in the type of tumor that any one patient may have.
Another reason misdiagnosis sometimes occurs is that many biopsies are taken with thin needles because this technique is minimally invasive. One problem with the so-called thin-needle biopsy is that the pathologist is forced to work with a very small tissue sample, and this can result in a greater margin of error than would be the case with a larger sample. Additionally, the smaller the tissue sample biopsied, the greater the chance that abnormal tissue has been missed. Biopsy results from a small sample may give the patient and doctor a false sense of security that nothing abnormal is present when in fact the abnormal tissue has simply not been biopsied.
In one case of surgical error related to us by a colleague, a surgeon performing a laparoscopic cholecystectomy (gallbladder removal) punctured an iliac blood vessel with a trocar upon entering the abdomen. The usual procedure is to place the trocar (a piece of metal with a sharp point) into the abdomen through the abdominal wall. This requires several pounds of manual force to accelerate the trocar through the abdominal wall and considerable dexterity to stop the trocar immediately after it enters the abdominal cavity. The trocar allows the surgeon a portal through which to insufflate and distend the abdominal cavity with inert gas so that he or she can adequately view the intra-abdominal contents. Sometimes scar tissue within the abdomen causes the structures to be adhered or displaced, which then places vital organs in harm’s way. In this particular case, the patient died because the injury to the blood vessel was not recognized and corrected.
Surgical errors are among some of the most catastrophic of medical mistakes. Because of their nature, surgical errors are disproportionately associated with serious or lethal outcomes (Gawande et al. 1999; Krizek 2000). Unfortunate surgical events take many forms, including the puncturing of intestines or blood vessels and the accidental tying off of such organs as the ureter (a duct that carries urine away from the kidney). These events in and of themselves are not necessarily errors, however; the errors occur when such mishaps are not recognized and quickly repaired or corrected.
Krizek (2000) found that of the 2,183 errors that occurred during and after surgery in his sample, surgical technical errors constituted only 10.5% of the total errors but 17.9% of the serious errors. The medical literature is replete with examples of surgical mishaps that occurred when laparoscopic surgery was introduced in the late 1980s and early 1990s. It is also well-known that surgeons in training and surgeons who are inexperienced in given procedures make more technical mistakes than do more experienced surgeons.
Additional surgical errors include wrong-site operations. Since 1996, there have been more than 150 reports of operations in the United States in which surgeons operated on the wrong arm, leg, kidney, or side of the brain, and even on the wrong patient (Altman 2001). Such errors prompted the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) to issue its second alert in 3 years on this particular topic. The JCAHO now suggests that the patient and the surgeon together mark both the correct site for surgery and the site that should not be touched, because in two reported cases the correct limb was marked but the unmarked, wrong limb was operated on anyway.
Government policies can influence the health of a given population, such as when they allow unsafe practices to continue. For example, in France in the mid-1980s, the Ministry of Health permitted the use of blood that was potentially contaminated with HIV when the government clearly should have known the risks that this involved. Our government is not completely innocent in this regard. In the early 1980s, the restrictions placed on blood donors were reduced; this error in government policy will most likely cost lives.
The prevention of error in medical practice is based on the concept of the “robust individual.” That is, the prevention of medical error relies on the presence of high-quality individual health care workers who can recognize and correct errors along the way as they happen. In essence, error interception is an intrinsic part of the job description of every health care professional. Medical practice has developed an aura of infallibility, which is absurd when one recognizes that the service of health care is performed by human beings, who are quite fallible. In short, people make mistakes. The attitude of society and medicine is the name, blame, and shame game, in which the person who committed an error is singled out, blamed, and punished. This strategy creates a culture in which health care workers are reluctant to admit their errors or flaws for fear they will be blamed and punished, a culture that is counterproductive to the long-term prevention of medical error.
Medicine is the only major industry in the United States that has yet to evaluate fully how it can prevent errors and improve safety. Medicine has not yet been willing or able to develop a system of full disclosure of errors, which is a necessary first step in developing a preventive strategy. Although some institutions and specialties have made some initial forays into error prevention, it is apparent to most observers that these efforts fall short of a comprehensive analysis leading to a national effort at medical error prevention.
It is axiomatic that health care consumers want the safest possible health care system. However, establishment of such a system would require full error disclosure by both health care workers and institutions, followed by failure analysis. This analysis would focus on why errors occur in an effort to develop systems for error prevention, rather than blaming and punishing the last person involved in the error process. Errors generally do not occur in isolation; rather, errors are the end results of a flawed system. Although full error disclosure is absolutely prerequisite to overall improvement in safety, there will be no such disclosure within the present culture. Nor will such disclosure occur without some level of legal protection in place for health care professionals; no one will admit error if there is a reasonable possibility that he or she will be sued. Health care consumers cannot have it both ways: They must choose between full error disclosure, which will lead to improved safety, and the freedom to sue the individuals involved when errors occur. The question the public needs to answer is, Do we want the safest health care system possible?