Discussion
Intraoperative deviations in care occur routinely—by our estimates, once every 79 minutes during complex procedures. Both provider and organizational/environmental factors predominate at the root of these deviations; we observed suboptimal team dynamics and poorly designed systems causing delays and compromising patient safety. These findings are in keeping with previous work on human factors, both within and outside of medicine.
In surgery, the contribution of various provider factors to deviations is well established. Communication, for example, is an often-cited provider factor in both safety compromises and delays. On the basis of intraoperative observations, Lingard et al estimated that 31% of all procedurally relevant communications in the OR fail, with 36% of these failures impacting case flow (18% cause inefficiency, 8% delays, 2% workarounds, 2% resource waste) and/or patients (2% inconvenience, 1% procedural error). Mazzocco et al demonstrated increased odds of complications or death with low intraoperative information sharing. Higher levels of positive communication were correlated with lower risk-adjusted mortality rates across Veterans Affairs medical centers. Similarly, absence of the attending surgeon at the beginning of the operation has been highlighted in a number of studies of OR delays.
The impact of organizational/environmental factors on intraoperative processes has also been recognized. Many have published their stories of OR efficiency improvements based upon changes in infrastructure and/or policy—for example, those regarding the allocation of human resources, block time release, and standardization of booking processes or instrument trays. Equipment problems are particularly common in the OR. In their intraoperative observations, Healey et al noted that unavailable or nonfunctional equipment was the distraction/interruption most likely to interfere with the case, whereas Wiegmann et al traced 11% of all surgical flow disruptions to difficulties with equipment or technology.
To our knowledge, no previous studies have attempted to determine the relative importance of specific provider or organizational/environmental factors in precipitating deviations or characterized their role in mitigation. Nearly all of our deviations (97%) were salvaged by providers; the organization and environment played no direct role in recovery in the cases that we observed. These data seem to directly contradict the prevailing dogma. Our understanding of error in medicine has paralleled that in the human factors world, albeit with several years' lag time. Initially, in their field, as in ours currently, adverse events were blamed on human fallibility, and emphasis was placed on the construction of standardized systems with safeguards to protect us from our own inconsistent performance.
Although this assumption became the impetus for many successful safety advances such as checklists and technological adjuncts for tracking sponges, the human factors field has since experienced a sea change. To follow up on his 1990 treatise on failure and the limitations of human cognition, "Human Error," James Reason recently published "The Human Contribution: Unsafe Acts, Accidents, and Heroic Recoveries." Human factors experts like Reason now recognize the power embedded within human variability: our ability to adapt. The concept of human resilience is beginning to emerge in health care.
In their observational study of 243 neonatal arterial switch operations, de Leval et al found that compensation reduced the risk of death in the event of major (potentially life-threatening) intraoperative errors. In a case study of patient load increases in the emergency department, Anders et al described individual personnel as the source of resilience; they alone recognized strain on the system and recruited additional resources to buffer it. Patterson et al detailed the role of collaborative cross-checking in detecting and recovering medication error incidents. Protocol-driven cross-checks were noted to be relatively ineffective; resilience was largely induced by people acting outside of their official roles and responsibilities.
System redesign is unquestionably an important tool in preventing and mitigating deviations. Some of the deviations we captured in our study could have been prevented through system modifications—particularly those that were attributable to the organization or environment, like the unnecessary blood order or the inopportune shift changes—thus obviating the need for provider-driven rescue. However, deviations are, to some degree, inevitable in the practice of medicine, where the potential knowledge base is limitless and presentations and circumstances constantly change. Academic tertiary care centers, for example, are bound to encounter physiologic fragility or provider inexperience, no matter how thoughtfully their support systems are designed. Therefore, as we continue our work to improve safety and efficiency in health care, we must simultaneously build systems that will help us avert deviations and train providers to anticipate and deal with those that are unavoidable.
We posit that the best-designed systems incorporate a degree of flexibility to accommodate positive fluctuations in the performance of their providers. Indeed, attempts at system redesign may have unintended consequences. Standardization, for example, so often the product of systematic change, may prove constricting in certain situations when the opposite is required. We have previously reported on the deleterious impact of disabling protocols and suggest that hospitals examine the downstream impact of all new procedures postimplementation, with particular focus on changes in provider functionality. For providers to adapt flexibly as deviations arise, the environment must be conducive to exploration. The importance of a culture of safety in this regard cannot be overstated. For any safety initiative to gain traction and achieve meaningful, lasting results, it must have support at its foundation. Safety must be institutionalized—valued by hospital leadership and administration, as well as by individual providers. As human factors experts have said repeatedly, safe is not something organizations are, but something they do. Safety comes with an ongoing commitment and a willingness to change.
As in any observational study, the Hawthorne effect may be a concern. However, we believe the use of video, with equipment placed as nonobtrusively as possible, minimized this risk. We suspect that its impact on our findings was smaller than that of a similar study using field observations, in which the presence of live personnel in the OR may serve as a constant reminder of the ongoing study. The operations we recorded were long and often arduous, absorbing the full attention of the teams. As the discussion in the room occasionally touched upon confidential topics, it became apparent that participants forgot about the recording in the course of the case. Thus, we believe that our capture is naturalistic. However, even if the teams remained aware of the study throughout the case and adjusted their actions accordingly, our estimate of deviations—our primary outcome—is a conservative one.
Because all recordings were performed at a single institution, generalizability may be a concern. As an academic tertiary-care hospital, we may treat more complex patients; however, as patient factors contributed relatively little to the evolution or the recovery of deviations, this difference in case mix is likely immaterial. Although we have a strong safety culture, our hospital does not employ the system innovations reported by other groups, and team training efforts here are not yet widespread; thus, we doubt our results would differ significantly from those of a typical academic tertiary-care hospital. As others in the surgical literature have previously reported the importance of various provider and system factors—communication or leadership, for example—in the OR, we suspect the themes highlighted by our data are universal.
We developed a novel coding system for identifying and classifying deviations. Benefiting from the input of a multidisciplinary team, we based our decisions upon discussion and consensus, rather than attempting to achieve concordance between 2 trained coders. Existing global rating scales for provider performance and environmental distractions, while seemingly easier to report and/or replicate, were considered but ultimately discarded for their insufficient interrater reliability, which we surmise stems from their ambiguity. Likert-type rating scales cannot encompass the nuances in human interactions with each other or their environment. Furthermore, our study teams (both the core research team and the clinical domain consultant team) were constructed with the goal of representing multiple, equally valuable perspectives. The discussion that preceded consensus allowed us to achieve a nuanced understanding of each deviation. If we had, instead, required the agreement of independent coders to designate deviations, the highly informative, domain-specific data generated by any single expert's reading of an event (eg, human factors, nursing, anesthesiology) might have been lost. Finally, as consensus was readily achieved despite the disparate backgrounds of our study team members, we expect that the results are indeed reproducible by others.
As Jeffcott et al state, tools for measurement in resilience engineering are currently lacking. Sheridan further explains that because resilience is in its early stages of development, the approach to it is more likely to be qualitative than quantitative. As such, there were no existing methodologies to aid us in our characterization of error mitigation in the OR. We feel that our methods, which combined qualitative and quantitative techniques and allowed us to retain a rich level of detail in our data, were best suited to our purposes.
Implications
Using video, we captured and deconstructed episodes of safety compromise and delay, as well as their recoveries, in the OR. These events are common, and, contrary to popular belief, are not purely the consequence of human imperfections allayed by protective systems. Although providers contributed to deviations (along with system and organizational factors), they were also an important source of recovery. Such human resilience has been recognized outside of surgery and will be a critical component of surgical safety in the future.
To date, the approach to improving surgical safety has focused primarily on error prevention. Although such interventions have led to marked improvements in safety, our results suggest that increased standardization is only part of the solution. Error mitigation is a complementary approach to patient safety; in this model, practitioners, with the uniquely human capacity for adaptation, are a primary source of resilience in an imperfect system. To cultivate this quality, we must design systems and promote a culture able to accommodate it, taking care not to restrict providers' ability to improvise with overprotocolization. In doing so, we will not only minimize deviations in care but also learn to recover from them when they inevitably occur.