The Coase Theorem and patient safety
  1. David Meltzer
  1. Dr David Meltzer, Department of Medicine, Department of Economics, and Graduate School of Public Policy Studies, The University of Chicago, 5841 S Maryland MC 2007, Chicago, IL 60637, USA; dmeltzer@medicine.bsd.uchicago.edu


Coiera and Braithwaite1 suggest a daring new “market-based” strategy to improve patient safety (see page 99). Their cap and trade idea is predicated on the assumption that some level of preventable adverse events in health care is appropriate, but that this level should be set by balancing the harms produced by the adverse events against the cost of avoiding them. Recognising that the costs of avoiding adverse events may vary across settings, they suggest a cap and trade system in which healthcare providers (or the systems they work for) pay for the harm resulting from adverse events by purchasing the right to produce (or have produced) those events, at a price set by a market given an agreed-upon total level of adverse events for the system as a whole. The attractive aspect of the logic of the system they propose, like the cap and trade system for pollutants, is that the entities best placed to reduce errors at low cost would have incentives to do so, while the entities that find it most costly to reduce errors would not spend an excessive amount of resources attempting to eliminate them. From this perspective, the proposed system is attractive because it would be expected to produce the agreed-upon level of error at the lowest cost.
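To make that incentive logic concrete, the toy sketch below uses purely hypothetical providers, event counts, abatement costs and a permit price (none of which is drawn from Coiera and Braithwaite's proposal) to show how a fixed permit price leads the low-cost improver to prevent events while the high-cost improver buys permits instead.

```python
# A minimal toy sketch (not the authors' model) of the cap and trade logic:
# providers differ in what it costs them to prevent one adverse event, so a
# common permit price leads cheap preventers to abate and expensive preventers
# to buy permits. All numbers below are hypothetical.

providers = {
    "A": {"events": 100, "abatement_cost_per_event": 500},    # cheap to improve
    "B": {"events": 100, "abatement_cost_per_event": 5000},   # costly to improve
}
permit_price = 2000  # assumed market-clearing price per permitted adverse event

for name, p in providers.items():
    if p["abatement_cost_per_event"] < permit_price:
        action, cost = "prevent events", p["events"] * p["abatement_cost_per_event"]
    else:
        action, cost = "buy permits", p["events"] * permit_price
    print(f"Provider {name}: {action}, total cost ${cost:,}")

# Provider A prevents its events (100 * $500 = $50,000, cheaper than $200,000
# in permits); Provider B buys permits ($200,000, cheaper than $500,000 of
# abatement). Error reduction happens where it is cheapest, which is the
# appeal of the scheme.
```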

Unfortunately, regardless of any attraction of the proposal from that perspective, a variety of fundamental challenges arise. First, setting the right level of errors would be exceptionally difficult. Presumably, the amount of error allowed would have to be set based on some estimate of the incremental harm produced by those errors and the incremental cost of reducing them. Neither of these figures is readily identifiable. In the absence of such data, setting the wrong total level of errors could give providers either excessive or insufficient incentives to reduce errors. Many other concerns can be identified. For example, even if the total level of errors could somehow be rationally based on the harms of errors and the costs of reducing them, it is not clear how errors could be reliably measured and converted into meaningful units. Another challenge is that procedures that carry risks would become more costly for providers, and eventually for payers, even when the benefits of a procedure far exceeded the costs. For example, imagine that a procedure, say nerve-sparing radical prostatectomy, costs $10 000 to perform and carries $10 000 in expected adverse-event costs (say, impotence occurring in one-third of men, producing a harm of $30 000 for each affected man) but $100 000 in potential benefits. Under the cap and trade model, the price of that procedure would rise from $10 000 to $20 000. Such costs would have to be paid by public or private insurers, or by patients seeking the procedure, to maintain the incentives of providers to perform it. This increase in price would raise healthcare costs, likely reducing the ability of some patients to afford the treatment. In essence, the cap and trade model would build the harms of these preventable adverse events into the price of the service. This could be an especially large problem if “preventable” events were not always completely preventable (or identifiable as preventable); in such cases, the cost of healthcare could go up by far more than the cost of the events actually prevented. A related problem is that it is unclear whether patients would be compensated for harm they experienced (ie, it is unclear who would receive the revenue from these trades). If patients were still able to sue for adverse outcomes, providers would effectively have to pay twice for any preventable adverse event. Moreover, patients who did not care about a potential complication would not have the opportunity to have the treatment at a lower cost, unless they were able to write contracts with their provider to forgo any compensation if they experienced an adverse event, which seems unlikely to happen. All these problems could be especially severe for patients at increased risk of adverse events, and the cap and trade system would likely create powerful incentives to avoid providing care to patients at higher risk of adverse events even when the patient felt the added risk was worthwhile. While risk adjustment schemes might be devised to assess risk at both the patient and the procedure level, and safety credits could be given for procedures to offset these potential costs of errors, the development and implementation of such adjustment systems could be very challenging. Providers could, for example, choose to selectively treat patients they can identify as at lower-than-estimated risk in order to gain safety credits, while avoiding patients whom they identify as at higher-than-estimated risk. Policing a system such as this would be neither easy nor inexpensive.
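The arithmetic of the prostatectomy example above can be restated briefly; the sketch below simply recomputes the figures already given in the text (a $10 000 procedure, impotence in one-third of men valued at $30 000 per affected man, and $100 000 in benefits) to show how the expected harm is folded into the price.

```python
# Restating the worked example from the text with the same (illustrative) figures.

procedure_cost = 10_000
complication_probability = 1 / 3
harm_per_affected_patient = 30_000
benefit = 100_000

expected_harm = complication_probability * harm_per_affected_patient  # about $10,000
price_under_cap_and_trade = procedure_cost + expected_harm            # about $20,000

print(f"Expected adverse-event cost: ${expected_harm:,.0f}")
print(f"Price with harm built in:    ${price_under_cap_and_trade:,.0f}")
print(f"Benefit still exceeds price: {benefit > price_under_cap_and_trade}")

# The procedure remains worth doing ($100,000 > $20,000), but the roughly
# $10,000 of expected harm is now embedded in what insurers or patients must pay.
```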

The importance of these many problems is compounded by the fact that the fundamental case for cap and trade is not clear in this setting. In the case of pollution credits, there is a clear commodity (eg, particulate matter in the air) that is measurable and that is both produced by and affects a broad range of parties, none of whom experiences the full costs or benefits of changes in the level of pollution. It is the presence of large “externalities” of this kind that makes the case for government regulation of pollution, for which cap and trade is one solution. Critical here is that property rights are not adequately defined. An economic theory known as the Coase Theorem,2 developed by the University of Chicago economist Ronald Coase, who received the Nobel Prize for this work, argues that if property rights were well defined (and the costs of bargaining over them were low), there would be no need for government to step in for this purpose. For example, if the polluter were a resort owner dumping sewage into the swimming pond on their own land, the owner would bear the full cost of the pollution and would clearly not engage in this activity. The point is important in healthcare because it highlights the extent to which the harms of errors are borne by identifiable parties as a critical factor in whether a cap and trade system makes sense. In particular, since the harms of errors are borne primarily by patients and those who pay for their healthcare, the market should provide the right incentives to reduce harm as long as the relevant information is available. For example, a surgeon with a higher-than-average error rate will have a hard time finding either patients or payers willing to allow them to operate. Similarly, when payment is prospective, as for a health maintenance organisation or a hospital paid prospectively, the healthcare provider will often already bear the costs of preventable adverse events. In short, incentives for patients and payers to avoid errors through competitive market forces are already present, making the case for a cap and trade system much less compelling for medical errors than for pollution.
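As a rough numerical illustration of this market argument (the fees, error rates and harm valuation below are hypothetical and not taken from the article), the sketch compares the expected total cost that an informed patient or payer faces with two surgeons who differ only in their error rates.

```python
# A minimal sketch of the market argument: when error rates are observable,
# patients and payers compare the full expected cost of each surgeon, so the
# higher-error surgeon is already penalised without any permit scheme.
# All numbers are hypothetical.

surgeons = {
    "low_error":  {"fee": 10_000, "error_rate": 0.02},
    "high_error": {"fee": 10_000, "error_rate": 0.10},
}
harm_per_error = 30_000  # assumed monetised harm of one adverse event

def expected_total_cost(surgeon):
    """Fee plus the expected adverse-event cost borne by the patient or payer."""
    return surgeon["fee"] + surgeon["error_rate"] * harm_per_error

for name, surgeon in surgeons.items():
    print(f"{name}: expected total cost ${expected_total_cost(surgeon):,.0f}")

# low_error: $10,600 versus high_error: $13,000. With this information
# available, demand shifts toward the safer surgeon, illustrating the point
# that costs borne by identifiable parties already create incentives.
```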

I suspect that the “economic perspective” of the cap and trade idea will be viewed as too radical by many in the patient safety community, and as not well justified from the perspective of economics, since the incentives embedded in competitive market forces would already seem to go far in addressing the problem. As a result, I see little likelihood that anything like it would ever be adopted. That said, there is one aspect of the proposal I find compelling: the basic assumption that some level of error is appropriate because reducing error is costly. Some rhetoric of the patient safety movement seems periodically to forget this simple fact, known all too well by front-line healthcare managers and providers. Until the patient safety movement truly embraces the fact that actions to reduce errors can be costly and that those costs need to be weighed against their benefits, whether through the explicit calculus of cost–benefit or cost-effectiveness analysis or through the more implicit assessments embedded in competitive market forces, radical ideas such as cap and trade will continue to be worth discussing.

REFERENCES

Footnotes

  • Competing interests: None.
