In the first part of this three article series on errors and mistakes in general practice, we explored the psychological basis for errors and biases in human behaviour that lead to mistakes occurring. In this second instalment, we think about the psychological limits that contribute to mistakes, and how we can use these insights to build a safer, open learning culture in medicine based around the principles of Black Box Thinking.
Published in 2015, Black Box Thinking by Matthew Syed explains the inextricable connection between failure and success, comparing the psychology, culture and institutional responses to failure, and how mistakes are managed, in two safety-critical industries: aviation and healthcare.
The book opens with a startling statistic: in 2013, out of three billion airline passengers across the globe, 210 people died in air accidents, compared with 400,000 preventable deaths in the US healthcare system alone. That is the equivalent of two jumbo jets full of passengers being killed by preventable accidents every day.
Why is this happening? And why does it continue?
A routine operation
The harrowing story of a routine operation provides some clues to the institutional and attitudinal differences between these industries.
In March 2005, Martin Bromiley took his wife Elaine to their local hospital for a routine sinus operation. Within two minutes of the anaesthetic being administered, jaw muscle contraction made oxygenation impossible using a series of laryngeal masks. As Elaine’s lips began to turn blue, her oxygen saturation fell to 75%. Two minutes on an oxygen mask made no difference, so the anaesthetist switched to tracheal intubation. But he couldn't see the airway.
After a further eight minutes, Elaine’s sats had dropped to 40% and her pulse to 50. A second anaesthetist and the surgeon arrived to assist, with three nurses on standby. One of the nurses had clearly grasped the severity of the situation: she grabbed a tracheotomy pack and made it clear to the anaesthetists that she was ready to assist. But the anaesthetists struggled on with attempts at tracheal intubation, eventually succeeding and getting oxygen saturation back up to 90%. This had taken 20 minutes.
Martin was called and informed that there had been a tragic complication, a one-off, and that the team had done everything they could. The feeling was that this was an inevitable, unavoidable and rare complication, with bad luck all round. Elaine was severely brain damaged and died two weeks later.
Martin wasn’t angry; he didn’t blame the clinical team, but he did want answers. He did not accept that this was a rare and inevitable situation. And that was because, crucially, Martin came from an industry that took a very different approach to ‘accidents’; he was a pilot and, as he learned more about what had happened to Elaine, he realised there were close similarities to previous airline ‘tragedies’.
In 1978, United Airlines Flight 173 was approaching the runway in Portland, Oregon, and deployed its landing gear ready for final approach. But the landing gear light, indicating that the wheels had been lowered, didn't light up. The plane was immediately put into a holding pattern so that the crew could check whether the wheels were down or not.
Like all airliners, UA173 had two flight recorders - black boxes - one for cockpit audio and the other for flight data. They would later reveal how the captain continued to search for reasons why the landing gear light hadn't lit up. Were the wheels jammed? Or were they down but the light simply not working? The flight engineer could be heard regularly reading out the plane’s fuel reserves, which had only minutes left before the engines would fail. But the captain remained immersed in searching for the cause of the problem rather than risk landing without full landing gear.
When the flight engineer, by now in great distress, finally told the captain that they needed to land immediately because the fuel was about to run out, they were still eight miles from the runway and circling over a busy city. The captain, clearly surprised, replied that he believed they still had several minutes of fuel left.
The plane crashed, killing eight people, including the flight engineer who had tried to raise the alarm with his superior officer.
The response to this incident, compared with that to Elaine Bromiley's death, is striking in itself: there was a statutory, immediate investigation into the crash. Historical and real-time data were used, including data from the black boxes, enabling comparison with previous incidents. Everyone involved cooperated freely, as the immediate assumption was that there must be a system problem. Evidence given to the independent inquiry was inadmissible in court. Within weeks of the report being completed, its findings were disseminated throughout the entire aviation industry, with recommendations for change - checklists, tools and training simulations - to protect air crews and their passengers in the future. Since these changes were implemented, the spate of crashes so typical of the 1970s has declined significantly.
This is Black Box Thinking in action, and the perfect illustration of an open-loop, evolving learning system.
“So that others may learn, and even more live”
In contrast, thirty years later in the NHS, Martin Bromiley faced an uphill struggle in trying to learn what had happened to his wife and to apply that learning to protect others.
When the report he fought for was finally published, the findings and recommendations were startlingly similar to those of the Flight UA173 case 30 years earlier, revealing a system that did not allow for the limits of human psychology. The powerful effect of social hierarchy, which had prevented both the flight engineer from assertively challenging his captain about the fuel reserves and the nursing staff from shouting for a tracheotomy, was clearly evident. The other signature was the narrowing of perception and loss of awareness of time passing that affected the captain of Flight UA173 as he focused on solving the problem of the landing gear, and the anaesthetists who intently and repeatedly attempted tracheal intubation and were later horrified to learn that their patient had been severely hypoxic for a full 20 minutes.
Martin went on to found the Clinical Human Factors Group, a charity dedicated to preventing these types of medical error from happening. As a result of his work, anaesthetic training and procedures have been transformed, with simple but powerful measures: regular time checks during emergencies, clear definitions of team members' roles, and the breaking down of social hierarchies so that any team member can call out a risk or obstacle that could harm a patient. Martin receives weekly messages from clinicians around the world telling him that his work in challenging failings once seen as inevitable, acceptable complications has deeply affected their training, and saved 'incalculable' lives from harm and death. A humbling example of an open learning culture in one corner of healthcare.
Failure and error in healthcare
But, on the whole, how far has healthcare since progressed towards adopting Black Box Thinking and a 'growth', open-loop, open culture?
There has been some rhetoric around developing a no-blame culture, but the culture remains doggedly stuck on individualising blame for the failures of an over-stretched system, with doctors and healthcare staff being held personally, and even criminally, responsible for system failings.
The prevailing attitude to error seems best summed up by James Reason, professor of psychology at the University of Manchester and a leading researcher and thinker on human error, who developed the Swiss Cheese Model of error causation and prevention.
“This is the paradox in a nutshell: healthcare by its nature is highly error-provoking - yet health carers stigmatise fallibility and have had little or no training in error management or detection.”
So why are health carers bad at learning from errors?
It’s worth examining Professor Reason’s Swiss Cheese Model to appreciate the attitude to error that should flow from it. The slices of Swiss cheese represent the layers of error protection, and the holes the gaps that can allow errors through. The model recognises the absolute certainty that human beings will make mistakes, and so seeks to design systems and environments that work around our human nature, reducing the risk of error as far as possible and protecting both the workforce and patients.
As we saw in the case of UA173, the assumption is that if an error occurs, it must be because there is an unacceptable alignment of the holes in the error protection layers, which needs identifying and closing.
So what stops us applying this approach in healthcare?
The external factors
There exists amongst the public, and even within our profession, a powerful instinct to condemn errors, especially in high-stakes situations where the outcome of decisions or actions falls on a third party - in the case of healthcare, a patient. This stems from the very human tendency to leap to conclusions based on incomplete evidence - the What You See Is All There Is (WYSIATI) described by Kahneman - and a collective hindsight bias that judges professionals on outcomes, not on their thought processes or actions in the scenario they found themselves in at the time. As humans, we find it psychologically more satisfying to create simple narratives when faced with complexity. And when there is a highly complex series of interacting, multi-system failures, our drive is instead to seek a simple solution, preferably embodied in a culprit we can blame and deal with to restore our sense of the world being controllable. As we saw in the previous article in this series, this is our deep System One (S1) need for coherence and stability.
The internal factors
The public like their doctors to be infallible when they need us and things go well. How often have you heard a patient attribute great healing powers to an individual clinician, despite the innumerable factors in that patient’s journey that actually contributed to their recovery? The flip side is that, when something does go wrong, we are quickly held “accountable” for factors not within our direct control.
As individual doctors, and as the medical profession, we have to accept that we ourselves may contribute to this illusion of infallibility which can quickly sour to total fallibility.
As GPs, we tend to be individuals who are high achieving in our early lives: we excel at school; we are told we are inherently clever because we hit targets; and then we go to medical school, where the pressure is ratcheted up by being surrounded by more people like us, who got even better grades than we did. We learn to judge ourselves on outcomes. If we fail at something, we are judged by others, but we also judge ourselves, and it can feel like a personal failing rather than a valuable experience to learn from. We spend our formative years in this feverish environment with little or no experience of the truly complex nature of failures and how we can learn from them. The irony is that we are studying medicine, which has progressed through application of the scientific method - the supreme example of an open learning culture, with progress and success evolving through a series of failures and negative outcomes. But none of this seems to rub off on us when it comes to considering our own performance and practice and, more widely, how the fruits of medicine are delivered in healthcare settings.
And then we are doctors: a group of individuals with little or no training in error causation and management, and a heightened internal fear of failure, working in a system that does not recognise our limits as humans. Bad outcomes are either someone else’s fault, hopefully not ours, or they are just one of those unfortunate things that happen occasionally - precisely the attitude that met Martin Bromiley when his wife died.
And it is all entirely subconscious; it’s not about consciously covering up errors to avoid external consequences. Adding to the psychological arsenal of WYSIATI and hindsight bias discussed in the first article is another phenomenon talked about by Kahneman, that of cognitive dissonance.
This stems from our instantaneous, automatic S1 thinking and is a powerful mechanism for hiding our failures and errors from ourselves to protect our self-esteem, our sense of being “competent”.
It is so subconscious that we often don’t realise we are denying errors. Like a person with dementia, incapable of forming new short-term memories, we simply don’t see the failure in the first place. Take the example of the clinical team in the case of Elaine Bromiley. They were not being dishonest when they initially explained her death as a rare but unavoidable complication. Perhaps many of us, at first glance, would have agreed. But as we’ve seen, opening up to the idea that this needed investigation revealed areas where system improvements were needed. This produced a leap in progress which has protected both patients and doctors, who are often the “double victims” of poorly designed systems.
In our third and final article on errors and biases in healthcare, I’ll illustrate the points we’ve made with the story of a locum session that culminated in the locum being blacklisted by a practice. Had a Black Box Thinking approach been taken to that series of unfortunate events, the practice might instead have evolved new processes that improved the working environment for all its staff and locums, and improved patient safety.
Richard has worked as a freelance GP locum since 1995 in around 100 different practices, living and working in West Sussex and Hampshire. He founded NASGP in 1997, is NASGP’s chairman, and started the UK’s first locum chambers in 2004.
He enjoys walking, reads too many books on behavioural economics and has an unhealthy obsession with his sourdough starter.