Chapter 9. Quality and patient safety: How do we get there from here?
Rebecca Warburton
Introduction
The journey is turning out to be more complicated than many hoped. There are no ‘quick fixes’; simply pointing out the problem has not changed things. Insights into the reasons why this is so are coming from many other fields, including complex systems research, human factors engineering, high-risk industries such as aviation and nuclear power, and studies of organisational culture and change.
Most healthcare organisations still lack basic measurements of the scope of the safety problem in their own settings. While the previously common ‘don’t ask, don’t tell’ code of silence is fading, recent infant heart surgery scandals in Winnipeg and Bristol (Gillies & Howard 2005) show that it isn’t gone. Similarly, the blame–shame–name culture that focused on weeding out the ‘bad apples’ (the person approach) is giving way to the systems approach (where it is recognised that systems, not careless individuals, create most patient harm) but the shift is far from complete. There is cause for hope, but also a great deal that needs to be done if healthcare is to realise its full potential to benefit patients while avoiding harm.
Awareness of the need for improvement now extends well beyond the services that deliver healthcare. Quality and safety recommendations and requirements from accreditation agencies, government departments, drug and equipment vendors and improvement organisations regularly inundate hospitals and other healthcare provider organisations. Yet a key problem for administrators and clinicians is that little guidance is available about the relative priority of desired changes. Other chapters in this book consider quality and safety improvements from the perspective of those working within organisations who are charged with responsibility for improving performance. This chapter proposes an economic model to prioritise improvement, recognising that not everything can be done immediately, and that solid research evidence does not yet exist on the costs and effects of many potential improvements.
The chapter argues that if we are to gain the maximum reduction in harm for the resources that hospitals have available to improve safety, we need to collect and use cost-effectiveness evidence both to prioritise proposed safety improvements and to target new research. The chapter:
▪ explains the rationale for the need to prioritise quality and safety initiatives
▪ describes the methods needed to produce evidence of cost effectiveness
▪ suggests a number of promising starting points to improve healthcare safety in the absence of good evidence.
How much safety is enough?
The dictum of ‘first, do no harm’ drives healthcare providers to seek perfect, harm-free performance. Yet modern medicine, like modern life generally, cannot always be made perfectly safe. Holding perfection as an ideal is inspiring and perhaps necessary to discourage complacency (Berwick 2001), but there can be great harm in trying to achieve perfection because near-perfection often imposes near-infinite costs. The closer we get to perfection in any particular area, the more likely it becomes that we could have achieved ‘a better bang for our preventive buck’ somewhere else (Warburton 2003). Therefore actions must be prioritised, and evidence of benefit and cost is needed to do so.
Economists count as a cost anything of value that any person gives up, including money, time or pleasure. Leading advocates of evidence-based medicine have adopted this approach, and suggest that physicians think of costs as ‘other treatments you can’t afford to do if you use your scarce resources to do this one’, noting that ‘when internists borrow a bed from their surgical colleagues in order to admit a medical emergency tonight, the opportunity cost includes tomorrow’s cancelled surgery’ (Sackett et al 1997:100).
In healthcare terms, the costs of preventable adverse events are the value of the quality-adjusted life years (QALYs) lost by patients who are harmed, plus the QALYs and lifetime productivity lost by health professionals unfairly blamed for ‘committing’ an error,11 plus the value of the time and resources used trying to mitigate or reverse harm, analyse the error and order compensation (i.e. treatment, risk analysis, legal and court costs).12 The costs of preventing these events include the direct costs of resources put into safety-improving initiatives, which could otherwise have been spent on healthcare, plus the value of QALYs and other resources lost because of delays or new errors caused by the safeguards themselves. Decisions to commit resources to improving safety should be made thoughtfully, because successive improvements in safety generally impose progressively higher costs for each increment of improvement gained. Consider the simplified model in Figure 9.1, which shows the costs (or value) of both prevention and adverse events at various levels of safety precaution. (Only the costs of preventable adverse events, i.e. those due to errors, are included in the figure.)
12. From the point of view of society as a whole, amounts paid in compensation do not count, as they are simply a transfer from one party to another, though of course these payments matter very much to those who pay or receive them.
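To make the two cost categories just described explicit, the minimal sketch below restates them as simple sums. The function names, arguments and the assumed dollar value per QALY are placeholders chosen for illustration, not figures from the chapter.

```python
# Hypothetical illustration of the two cost categories described above.
# All names and the dollar value per QALY are placeholders, not figures from the chapter.
VALUE_PER_QALY = 50_000  # assumed monetary value of one QALY ($)

def preventable_adverse_event_cost(patient_qalys_lost,
                                   clinician_qaly_and_productivity_loss,
                                   mitigation_analysis_and_legal_cost):
    """Value of patients' lost QALYs, plus losses of unfairly blamed professionals,
    plus resources spent mitigating, analysing and litigating the harm.
    Compensation payments are excluded, since they are transfers (see note 12)."""
    return (patient_qalys_lost * VALUE_PER_QALY
            + clinician_qaly_and_productivity_loss
            + mitigation_analysis_and_legal_cost)

def prevention_cost(direct_safety_spending,
                    qalys_lost_to_delays_and_new_errors,
                    other_resources_lost):
    """Direct spending on safety initiatives, plus the value of QALYs and other
    resources lost to delays or new errors caused by the safeguards themselves."""
    return (direct_safety_spending
            + qalys_lost_to_delays_and_new_errors * VALUE_PER_QALY
            + other_resources_lost)
```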
Figure 9.1 Costs of prevention and of preventable adverse events at various levels of safety precaution
So how much safety is enough? If we consider only preventable adverse events and ignore the cost of improving safety, then perfect safety should be the goal. Once we consider the costs of prevention, though, we can see that we want to minimise the sum of the costs of adverse events and prevention. This sum is the dotted line in Figure 9.1 and it is U-shaped; we have enough safety at the bottom of the U. Improving safety beyond that point imposes marginal (extra) costs (from new safety precautions) that are larger than the marginal benefits (savings from newly avoided preventable adverse events).
A note about the shape of the curves in Figure 9.1: as drawn, the figure implies that safety improvements are adopted in order of cost effectiveness. Starting at the left, the steep slope of the dotted line indicates that total cost decreases rapidly as we first begin to improve safety, because initial safety improvements are relatively cheap (prevention cost increases very little) and quite effective (preventable adverse event cost drops significantly). The dotted line flattens out as we move towards the bottom of the U because later safety improvements cost a little more, and produce smaller benefits. Past the optimal level of safety (bottom of the U) the curve turns up; total costs increase because extra safety improvements are now quite costly yet produce only small benefits (little reduction in adverse events). This trade-off gets continually worse as we increase safety further past the optimum.
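To make the logic of Figure 9.1 concrete, here is a minimal sketch using entirely hypothetical cost figures (not drawn from any study). It tabulates prevention and adverse-event costs at increasing levels of safety precaution and identifies the level that minimises their sum, the bottom of the U, beyond which each additional precaution’s marginal cost exceeds its marginal benefit.

```python
# Illustrative only: hypothetical costs at increasing levels of safety precaution ($m).
# Prevention costs rise faster at higher levels; adverse-event costs fall, but by less.
prevention_cost = [0, 1, 3, 7, 15, 30]      # cost of safeguards at each level
adverse_event_cost = [40, 25, 15, 9, 6, 5]  # cost of remaining preventable harm

total_cost = [p + a for p, a in zip(prevention_cost, adverse_event_cost)]

# 'Enough' safety is the level with the lowest total cost (the bottom of the U).
best_level = total_cost.index(min(total_cost))

for level, (p, a, t) in enumerate(zip(prevention_cost, adverse_event_cost, total_cost)):
    marker = " <- lowest total cost" if level == best_level else ""
    print(f"level {level}: prevention={p}, adverse events={a}, total={t}{marker}")

# Beyond best_level the trade-off worsens: moving from level 3 to 4 adds 8 in
# prevention cost but saves only 3 in adverse-event cost, so total cost rises.
```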
Arguably, healthcare safety will need to increase dramatically before we will be anywhere near the optimal safety level at the bottom of the U. But even in our current situation, we need to remember that every action comes at the expense of not doing something else. In the rush to improve patient safety, we need to remember that putting more effort into safety will likely force hospitals to put less effort into something else, at least in the short run.
Costs and benefits – Why economic evaluation?
If more safety means less of ‘something else’, it becomes essential to ask: Which something else? What must we give up to make healthcare safer? There is no one answer to this question. For example, different safety improvements may either:
▪ cost very little, and create huge healthcare savings by avoiding patient injuries that could otherwise be corrected only at great cost (Berwick 1998)
▪ cost very little, and save lives, but produce little in direct healthcare savings
▪ cost a great deal, and produce only small reductions in risk
▪ delay care or create new risks (Patterson et al 2002)
▪ increase workload, reduce employee satisfaction and make future changes harder
▪ streamline work and make it more rewarding, increasing employee buy-in and making future changes easier.
In clinical terms, as Merry notes subsequently in Chapter 11, extra safety checks may delay care and reduce patient throughput, endangering patients in situations where timeliness is essential. Without a fair assessment of the costs and effects (both intended and unintended) of proposed changes, it can be difficult to set priorities and impossible to know whether the best choice has been made. Yet most of the research now planned or in progress ignores costs and looks only at effects. This must change if we are to make sensible choices. We can turn to established methods in economic evaluation and technology assessment to inform these decisions, because the need to compare costs and effects is not unique to error reduction.
Pause for reflection
What do we give up when we try to make care safer? Do unintended consequences reduce the benefits in terms of safety and quality?
Setting priorities
When costs are not considered before action is taken, higher-than-acceptable costs can be the result. Most health regions can adopt only a fraction of recommended improvements, owing to constrained finances and limited staff time to implement change safely. This creates a danger of not adopting improvements in order of their cost effectiveness, and real examples exist: universal precautions (an occupational health and safety initiative recommended by the US Centers for Disease Control to prevent worksite transmission of HIV to healthcare workers) cost from US$100,000 to US$1.7 million per QALY.13 These precautions have been widely implemented in the US and Canada, yet many vaccines with much lower costs per QALY remain underutilised (even by Canadian regional health authorities responsible for both hospitals and immunisation).
13. Original estimate of C$8 million to C$128 million (1990) per case of HIV seroconversion prevented, from Stock et al 1990; updated and converted to US$6.5 million to US$104 million (2002) by the author (Warburton 2005); cost per QALY based on 60 lost QALYs per seroconversion.
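As a check on the arithmetic behind the US$100,000 to US$1.7 million per QALY range quoted above, using the updated US$ estimates and the assumed 60 lost QALYs per seroconversion:

```python
# Updated cost per HIV seroconversion prevented (Warburton 2005), in 2002 US$.
cost_per_case_low = 6.5e6    # US$6.5 million
cost_per_case_high = 104e6   # US$104 million
qalys_lost_per_case = 60     # assumed QALYs lost per seroconversion

low = cost_per_case_low / qalys_lost_per_case    # ~US$108,000 per QALY
high = cost_per_case_high / qalys_lost_per_case  # ~US$1.73 million per QALY

print(f"Cost per QALY: roughly ${low:,.0f} to ${high:,.0f}")
# Roughly US$100,000 to US$1.7 million per QALY, as cited in the text.
```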
Because human preferences must be considered in setting priorities, it is not possible to set rigid rules for the ‘correct’ value of a QALY. Despite general support for the use of QALYs in setting priorities (Bryan et al 2002), allocation preferences are affected by circumstances and patient characteristics (Schwappach 2002). Most people value fairness and are willing to sacrifice some QALY gains to achieve more equality in access (Schwappach 2003). Perhaps healthcare workers are seen as more deserving of protection because they risk exposure to disease in their role caring for others. Alternatively, healthcare employers may have been willing (as in the case of universal precautions mentioned earlier) to incur high prevention costs because of the difficulty and expense of compensating and replacing skilled employees infected with HIV at work. The argument that the high cost paid to protect healthcare workers reflects preferences would be more compelling, however, if there were evidence that the high costs had been known before universal precautions were implemented; a credible rival hypothesis is that many non-optimal choices result from a lack of information. (If employers regarded employees as deserving of special treatment, then they must have understood that universal precautions were expensive and judged the benefits worth the cost; alternatively, they may have made the decision without realising how expensive it actually was. It cannot be both ways.)
While preferences might mean that we would not adopt improvements in exact cost-effectiveness order, we still would not expect huge discrepancies. Hence, it is hard to argue against estimating costs per QALY before (rather than after) initiating costly safety improvements.
Adopting costly, ineffective safety improvements when inexpensive but effective ones are available would be a double waste of resources; more would be spent than necessary on safety, and less obtained (in benefits) than was possible. In the US and the UK, reducing errors is seen as central to improving quality while controlling costs (Barach & Small 2000). Yet this hope is unlikely to be realised unless research examines both the costs and the effects of specific changes intended to reduce errors.