Decision technologies and the independent professional: the future's challenge to learning and leadership

Jack Dowie, professor
Public Health and Policy Department, London School of Hygiene and Tropical Medicine, London WC1E 7HT
Correspondence to: Professor J Dowie jack.dowie@lshtm.ac.uk

Abstract

Most references to “leadership” and “learning” as sources of quality improvement in medical care reflect an implicit commitment to the decision technology of “clinical judgement”. All attempts to sustain this waning decision technology by clinical guidelines, care pathways, “evidence based practice”, problem based curricula, and other stratagems only widen the gap between what is expected of doctors in today's clinical situation and what is humanly possible, and hence produce the morale, stress, and health problems doctors are increasingly experiencing. Clinical guidance programmes based on decision analysis represent the coming decision technology. Proactive adaptation to it will produce independent doctors who can deliver excellent evidence based and preference driven care while concentrating on the human aspects of the therapeutic relationship, having been relieved of the unbearable burdens of knowledge and information processing currently laid on them. History is full of examples of the incumbents of dominant technologies preferring to die rather than adapt, and medicine needs both learning and leadership if it is to avoid repeating this mistake.


Key messages

  • “Clinical judgement” is still the dominant decision technology in medicine.

  • Guidelines and care pathways represent attempts to improve the quality of care within this technology.

  • Doctors are being placed under increasingly unbearable stresses and burdens by these attempts.

  • Clinical guidance programmes based on decision analysis offer the decision technology that can enable independent practitioners to deliver evidence based, preference driven, cost effective care, and have a life.

  • Leadership and learning should be directed to proactive adaptation to this coming technology, not attempts to sustain the waning one.

As an involved outsider one observes the stresses and irritations of doctors confronted by increasing demands to implement “clinical protocols” within “managed care pathways” and to work as “team members” within a “multidisciplinary system”. One sees and hears them wondering what happened to their much vaunted ability to deal autonomously with the individual patient on the basis of their clinical experience and expertise. To help them make sense of what has happened, and to open up the possibility of recovering what they have lost and stand to lose further, it is essential to stand well back and see these developments in historical perspective. What follows is necessarily a highly personal interpretation and prognosis.

From one perspective, human history is the story of waxing and waning technologies. Through early millennia the main focus of these technologies was the production of basic sustenance. Later, other sources of human satisfaction—the desire for less basic nutrition, for psychological and physical stimulation, and to avoid physical effort—have driven technological advances and the major socioeconomic changes that have always accompanied them. (Nobel laureate Robert Fogel has recently shown that improved basic nutrition has remained an underpinning condition of economic progress in the West even in the last century.) In recent times the evolutionary pace of change in industrial technology has become revolutionary and been accompanied by equally rapid changes in service technologies, notably transport and communications. The barely credible rate of developments in information technology (IT) is currently the focus of attention.

All these types of technological change have profoundly influenced medicine and healthcare services, not least IT, which is now often seen as the key—even the “magic bullet”—for delivering higher quality health care to populations whose expectations are increasing, if anything, more rapidly than technology is advancing.

As technologies wax and wane there are beneficiaries and sufferers. The long term beneficiaries have typically been the consumers while the long term sufferers have typically been the producers—the bosses and the workers—who invested their financial or human capital in the waxing phase during which they usually flourished (relative to their own group, of course). However, if they were incapable of adaptation to the successor technology through myopia, unwillingness to write off sunk costs, inability to undertake the necessary geographical relocation, or any other reason, they eventually died. Thus, the handloom weavers and many of those who employed them expired with the arrival of the weaving machine; the stage coach operators died out with the arrival of the train and bus; and the hot metal printers of Fleet Street, who held out against computer typesetting for many years (in collaboration with a cartel of employers not willing to bite the bullet), disappeared. The examples are endless and the associated human stories often sad if not tragic. But one must not confuse human compassion for the individual and community tragedies that often accompany technological change with the need to accept “adapt or die” as the basic lesson. Unfortunately, slow death has usually been more prevalent than—perhaps preferred to—proactive adaptation.

One type of technology, decision technology (DT), has until now remained relatively immune from change, not least among professions such as medicine. Decision technologies (ways of making choices) must be carefully and clearly distinguished from information technologies (ways of making choosers better informed). However, this immunity is under threat and doctors and others who are empowered to act as decision making “agents” because of their superior knowledge/evidence/information will be the greatest sufferers from the DT revolution, even if they successfully cope with the IT one, unless they show themselves much more ready to adapt to new DTs than is currently evident. If they are ready to change and can change fast enough, the future for the independent practitioner—albeit necessarily operating within a constraining system of evidence and decision making principles and aids—is relatively rosy.

Decision technologies: an illustration

To make this discussion less abstract, reference is made to a recent study1 which focused on the decision faced by premenopausal women who had already decided to have a hysterectomy and who therefore faced the consequential choice of whether or not to have their ovaries removed during the operation, as well as the associated decision of whether or not to go on to hormone replacement therapy (HRT), and for how long, under each scenario.

At the time the study was designed, prophylactic oophorectomy (PO) was fairly common; indeed, the feeling that an excessive number of such operations was being performed was one motivating factor. Three years later, when the study was under way, the number of POs (and hysterectomies) appeared to have fallen considerably. This was thought to be the result of a change in clinical “fashion” or, to put it in more dignified terms, a general shift in clinical policies and attitudes in favour of preservation and conservation. Whatever the reason for the change, one thing is clear: it had certainly not resulted from a change in DT, the way in which the decisions were made. It was clear from the start that there was a fundamental clash between the DT being used as the basis of decision support for the women and the DT employed by the consultants.

The consultants used what is adequately summarised as “traditional clinical judgement” based on long training and extensive experience and on “keeping up to date”; there was no obvious sign of any adherence to, or endorsement of, the formal requirements of evidence based practice. However, we did not attempt to dig deeply into this DT—many before us have tried to articulate it with comparatively little success—and we did not need to do so in order to establish that it was nothing like ours.

Our DT consisted of decision analytical modelling of the decision to undergo PO in a clinical guidance programme (CGP). To clarify one point, this is called a DT because, while it was formally used as a decision support or aid for patient and doctor, there is nothing in the technology which says it cannot or should not be used as a decision maker. It is humans (including judges) who decide that it should not be so used, reflecting the current implicit belief that, in medicine, any qualified human (however inadequate) can outperform any decision analytical model (however good).

Subject to the relevant patient information being gathered and input, our CGP provides patient risk factor adjusted assessments of the consequences of having or not having a PO in conjunction with varying durations of HRT (after oophorectomy where the operation is performed or after natural menopause where it is not). The assessments are based on Markov cycle trees constructed for the four major types of disease consequence: ovarian cancer, breast cancer, osteoporosis induced fractures, and coronary heart disease. These cycles accumulate both raw life years and health related quality of life adjusted life years (QALYs), the latter based on administering the generic EQ-5D instrument to the patient. Patients can default to population based survey results if they either wish to avoid this task or are happy to be treated as “average” in relation to health state preferences. The results can be risk adjusted and time discounted at any chosen rates. Sensitivity and threshold analyses are available on all parameters, either directly or by re-running the programme. The modelling allows costs to be input and patient specific cost effectiveness assessments arrived at, although the latter were not produced in the programme which was piloted clinically.
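
To make the mechanics concrete, the following is a minimal sketch, in Python, of the kind of Markov cohort calculation just described. The three-state structure, the transition probabilities, the utilities, the cycle count, and the discount rate are all illustrative assumptions for a single scenario; the actual programme ran separate, patient risk factor adjusted cycle trees for each of the four disease consequences under each PO/HRT scenario.

```python
# Illustrative Markov cohort model: a hypothetical three-state process
# (well / post-event / dead) run as annual cycles, accumulating raw
# life years and discounted QALYs. All numbers here are assumptions
# for the sketch, not parameters of the actual CGP.
import numpy as np

STATES = ["well", "post_event", "dead"]

# Hypothetical annual transition matrix (each row sums to 1).
P = np.array([
    [0.94, 0.04, 0.02],   # from "well"
    [0.00, 0.90, 0.10],   # from "post_event"
    [0.00, 0.00, 1.00],   # "dead" is absorbing
])

# Health state utilities, e.g. elicited from the patient with EQ-5D,
# or defaulted to population survey values if the patient prefers.
utility = np.array([1.00, 0.70, 0.00])

def run_cohort(p, u, cycles=40, discount_rate=0.035):
    """Return (raw life years, discounted QALYs) for one scenario."""
    dist = np.array([1.0, 0.0, 0.0])   # whole cohort starts "well"
    life_years, qalys = 0.0, 0.0
    for t in range(cycles):
        life_years += dist[:2].sum()               # alive states
        qalys += (dist @ utility) / (1 + discount_rate) ** t
        dist = dist @ p                             # one annual cycle
    return life_years, qalys

ly, q = run_cohort(P, utility)
print(f"Life years: {ly:.2f}, discounted QALYs: {q:.2f}")
```

Running the same calculation under each option's transition matrix, with the patient's own EQ-5D based utilities, is what yields the option by option comparisons the programme reports.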

Whether ours is a particularly good CGP is not the issue here. The consultants' reactions to it, whether as an idea or as a demonstrated product, were interesting. There was never any overt attempt to cast doubt on its conceptual structuring or disease modelling or on the quality or relevance of the vast database of evidence on options and conditions that it contained and processed to a patient specific recommendation. We would have been delighted if there had been such criticism or comment because one of our aims was to design a programme into which it would take only a few moments to substitute better data if any were offered. Indeed, we welcomed the input even of “clinical hunches” about various parameters to see what difference they would make, but none was forthcoming.

Their comments had essentially two themes: (1) this is not how we make decisions, and (2) this system would not fit into our clinical practice and in that sense is not “acceptable”. We have no reason to disagree with either of these descriptive propositions; indeed, we have every reason to agree with both. However, we are understandably keen to point out that, from the point of view of evaluating the CGP as a way of improving the quality of care and maximising patient welfare, both responses are completely irrelevant.

We piloted the system ourselves, with minimal contact with, or impact on, the normal progression of patients through the hospital systems concerned. We have no complaints about the cooperation we received in doing this, but we now have serious doubts about whether alternative DTs can ever be properly evaluated within an operating healthcare system. The wider issues therefore remain: how can the potential benefits for patients from changes in medical DTs be fairly assessed, and how can those changes be implemented if they are shown to be beneficial to patients?

Can there be any serious doubt that the complex modelling and systematic incorporation of evidence in CGPs, relating both to the individual patient's clinical condition and to their personal preferences over health states, give them a priori claims to be a superior DT?2–4 Is there any serious doubt that evaluation of these claims cannot properly be undertaken if clinical judgement is taken as the gold standard DT and departure from it is, by definition, penalised? It seems obvious that comparative evaluation of DTs should be carried out on a level playing field which does not privilege one of them, effectively making it judge and jury in its own case.5

None of the clinicians showed any desire to engage in a serious critique of the modelling or data of the DT, quite understandably given their acknowledged competence and qualifications in their own DT, and quite wisely given that there is no way any individual clinician (or other human being) could hold what the model is holding and do what the model is doing in his or her head.

Sustaining the dominant DT

From this perspective, and looking back, clinical guidelines and care pathways are likely to be seen as attempts to prop up and sustain a waning DT—traditional clinical judgement—in the face of its increasingly identified “pathologies” and contributions to suboptimal care. Some of the latter were simply the result of “poor or bad apples”, but many were undoubtedly the product of the increasing demands placed on professionals by exponentially increasing amounts of knowledge and technical possibilities against a background of limited resources. And that is precisely the point. What was needed was not the stress increasing requirements of multiple textual guidelines, care pathways, and injunctions to evidence based practice, but a fundamental change in DT which would bring the clinical task within normal human capabilities. At the moment CGPs, which enable the individual practitioner to treat every patient as an individual while ensuring that the patient receives the best evidence based care in keeping with their preferences, seem a step too far. However, the major hurdles have been overcome and it is only the resources and the will that are holding back the revolution.

Where are the validated instantiations of this new DT? This is the classic response from defenders of any dominant technology, who are able, through their positioning in funding bodies and in peer reviewing for key journals, to inhibit the development of competitors. Much excellent work has already been reported in the journal Medical Decision Making, which is not taken by many medical libraries in this country. The most advanced work is that on “distributed dynamic decision making” by Gillian Sanders and colleagues at Stanford Medical School.6 They have developed ALCHEMIST, which transforms decision analyses into algorithms for easier clinical use, and PORTAL, an on-line system through which clinicians can run decision analyses located on the server with their own data. Numerous other examples of decision analysis based systems are available at the TreeAge website (www.treeage.com).

It is important to note that we are quite explicitly not talking about knowledge based “expert systems”, the quasi-alternative DTs which have received much attention and funding in the last two decades but have yielded comparatively little. This approach failed for two main reasons: (1) capturing the “knowledge” embodied in expert clinical judgement proved virtually impossible, even using the best “knowledge engineers”, and (2) there was no conceptual basis on which the technology could deliver assessments that reflected the individual patient's preferences as well as their clinical state. Essentially, “expert systems” were doomed by the fact that they were trying to mimic clinical judgement rather than replace it.

It is quite normal for those with heavy investments in a particular technology to view the incoming technology with distaste and hostility. It is natural for them to develop all sorts of ways of dismissing the threat by casting doubt on its merits, or its practicality, or its cost, or its ethicality. If the incumbents can manage to get “acceptability” to those using the current technology written into evaluation instruments, this is obviously a killer criterion. All these tactics work better in a context where the consumer cannot easily see or assess the product differences. Those with investments in stagecoaches could complain about the negative effects of railways as much as they liked, and they had some minor victories because of their political clout, but the result was never in doubt because the customer could see and experience the differences. Where the customer knows little about the relationship between input and outcome (and doesn't particularly want to think about the outcomes), the room for delay is great. So far doctors have managed to argue that it is unethical to trial alternative decision technologies against clinical judgement in the rigorous way they demand for all other technologies. (In fact, they do not demand this for all procedures, which is why so many technologies such as organ transplantation have never been trialled under the conditions doctors demand for drugs, or been trialled at all.) With doctors backed by a legal profession that is similarly locked into a DT of its own devising, the possibility of continuing loss of patient welfare from suboptimal decision making is great.

As in the rail industry and social work, it has therefore taken major accidents or scandals (like the Bristol paediatric heart surgery case) to produce anything but a cursory look at the decision making processes of clinicians and the institutions in which they work. Even then, the diagnosis is fudged in the resulting enquiry, because these are not truly organisational problems which can be solved by “better communication”, “better multi-agency cooperation”, “better error reporting”, introducing “whistle blowing systems”, or any of the other diversionary offerings from those who cannot accept that it is the quality of the reasoning, judgement, and decision making of human beings in general that is at the root of the problems. These problems will only be addressed successfully and permanently when it is accepted that decision making is an activity for which relevant analytical skills exist; that these analytical skills need to be recognised, taught, and monitored as thoroughly as any other skills; and that supportive programmes and systems are essential to enable people to deliver the full benefits that can flow from more analysis based decision making.

Requirements for changing DT

There are few if any instances in human history of significant and sustained progress occurring as a result of people being made better at doing something. Progress occurs as people are relieved of tasks that can be performed better by machines or in combination with machines, where “machine” is interpreted very broadly. Doctors have welcomed almost all technological advances which enable them to do a better job for patients and overcome the natural limitations of their various faculties and senses. In medicine, stethoscopes to aid hearing, radiographs and scans to aid seeing, and surgical tools to aid precise manipulation have all been speedily accepted by the majority of the profession, even if an old guard has usually held out against them briefly. Indeed, there are continuous and strident demands for increased funding of all sorts of technological advances—with the notable exception of those that focus on judgement and decision making.

Why is there no demand for progress in DT? Decision making is unique because it is intimately connected with the self-esteem and ego of the professional person and, in this respect, professionals are little different from any of us. The interesting question for the detached observer is how long, and at what monetary and non-monetary cost, humans will reserve their right to do things “my/our way” as far as decision making is concerned. The case for endorsing human irrationality as a higher rationality has been made by many over the centuries, none more effectively than Fyodor Dostoevsky in Notes from Underground (1864), where he suggests:

“…twice two is four is not life, gentlemen, but the beginning of death… I agree that two and two make four is an excellent thing, but to give everything its due, two and two make five is also a very fine thing.” (Penguin edition, 1972, 40–41)

In 2001 we are in a very different context from his. On the one hand we have private persons deciding what to do at their own expense and, on the other, we have professionals deciding (in collaboration with patients) what to do for patients at public expense. One may happily go along with Dostoevsky and defend irrationality and suboptimality, whatever its source, in the former case, while still seeing it as an ethical duty to seek to eliminate or reduce it, and the injustices it produces, in the latter.

What would proactive adaptation to the coming DT involve? Pessimistically, it may be a waste of time asking the question, since there is little historical evidence that those with heavy investments in a technology have ever been able to act quickly enough in changing their training and socialisation to prevent the new technology bypassing them. But what if the medical profession and its associated healthcare system decision makers did set out to change proactively? What would be needed, and what would be the upside of success? They would need to accept the following three propositions:

  • that decision making, not problem solving or any other activity, is the central activity in clinical medicine around which everything else needs to be organised;

  • that the principles of good decision making can be explicitly articulated, formally and transparently addressed, and made the subject of training and education;

  • that the human practitioner, unaided in information processing and decision making, has no chance of outperforming one aided by a well developed decision support system in respect of any of the several tasks that must be undertaken when seeking to deliver evidence based and preference driven care (and cost effective care in the context of a resource constrained publicly funded healthcare system); these tasks include option framing, scenario modelling, probability assessment, value elicitation, and the integration of all of these by some maximising principle, as illustrated in the sketch following this list.
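
As a concrete illustration of that final integration step, here is a minimal sketch in the same spirit as the earlier one. The option names, the stand-in QALY functions, and all their numbers are hypothetical; in a real CGP each function would be a full consequence model of the kind described earlier, fed by the evidence base and by the patient's elicited values.

```python
# Integration by a maximising principle: score each framed option as a
# function of one elicited utility and pick the option with the highest
# expected discounted QALYs. All names and numbers are hypothetical
# stand-ins for full consequence models.

def qalys_po_plus_hrt(u_menopause: float) -> float:
    # Stand-in for a full Markov model of PO followed by HRT.
    return 18.0 + 4.0 * u_menopause

def qalys_conserve(u_menopause: float) -> float:
    # Stand-in for a full Markov model of ovarian conservation.
    return 19.5 + 2.0 * u_menopause

OPTIONS = {
    "prophylactic oophorectomy + HRT": qalys_po_plus_hrt,
    "conserve ovaries, HRT after natural menopause": qalys_conserve,
}

def integrate(u: float) -> str:
    """Value elicitation supplies u; return the maximising option."""
    return max(OPTIONS, key=lambda name: OPTIONS[name](u))

# Threshold analysis: sweep the elicited utility and report where the
# recommendation flips from one option to the other.
previous = None
for step in range(101):
    u = step / 100
    best = integrate(u)
    if previous is not None and best != previous:
        print(f"Recommendation switches to '{best}' at u = {u:.2f}")
    previous = best
```

The closing loop doubles as the kind of threshold analysis the CGP offers: it reports the elicited utility at which the maximising option changes, which is precisely the information a patient sitting near that threshold needs to see.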

Medical curricula have been changing as a result of the manifest deficiencies of the training previously received by doctors, deficiencies exposed when they were confronted by the realities of clinical practice. The most radical of these responses has been the development of whole medical schools based on the so-called problem based approach, in which “real world problems” are used as the motivation for acquiring the scientific knowledge required to arrive at appropriate diagnostic and management strategies. It is important not to diminish the impact of these changes relative to the previous (and still widespread) medical curriculum, with its almost total emphasis on the unmotivated acquisition of scientific knowledge. Viewed from the perspective of what is needed, however, the problem based approach represents a limited change, and one that is probably best interpreted as an attempt, like formal evidence based medicine, to extend the life of the waning DT of clinical judgement.

Upside of the new DT

There will be great benefits to all from the new DT. The future is actually rosy for autonomous professionals, once they have recognised and accepted that this autonomy needs to be exercised in the context of being aided and supported in their clinical decision making by systems that relieve them of the unbearable burdens of knowledge accumulation, updating, and processing that are currently grinding them down. Such systems will also relieve them of much of the legal responsibility that goes with this independence. It is usual to attribute the low morale and increased stress related health problems of professionals in recent times to organisational factors such as bureaucratic requirements and time pressure. But the fundamental truth is that they are in a position similar to that of the handloom weavers trying to raise their productivity to the rate of the new machines. It is a hopeless and pointless quest, and the health of the professional is one of the things most at risk from its futile pursuit.

The future autonomy of the doctor will therefore combine that of the nurse/counsellor/friend—and who is to deny the importance of these roles in health gain and beyond—and that of the airline pilot flying a highly sophisticated piece of machinery. Of course it will then be asked, rhetorically, why such large amounts of money are spent on training doctors for so long in acquiring and updating knowledge, most of which they do not need. The released funds will be much more effectively diverted into providing the decision support/making systems that make it easy for all doctors to be excellent doctors in every consultation, subject to their being adequately trained in operating the system for the benefit of each individual patient.

Conclusion

Professionals who have been brought up, though never trained, to make independent decisions, and who are now expected to work as members of a team and follow the agreed guidelines and care pathways of their “system”, experience severe role conflict and psychological dissonance. These cannot be resolved by trying to change their attitudes, by different training, or by extra continuing professional education. Monetary incentives may change behaviour, but they are expensive, short term in their impact, and an ethically unjustifiable way of spending public money. The role conflict and dissonance can only be finally resolved by changes in DT which transform the operation of the whole system, restoring the ability of the individual professional to deal independently with each of his or her patients by use of CGPs. The values and preferences of each patient can be elicited, entered, and combined with the best patient specific evidence available at that moment and with any cost effectiveness constraints applying in the public system.7 Every patient is special, and there is no need to develop complex verbal guidelines that must be consulted, interpreted, and tailored by the practitioner. With CGPs individual practitioners can “fly solo”, albeit in a plane that relieves them of many of their past tasks and allows them to concentrate on the limited range of tasks connected with operating the machine and, above all, on the essentially human task of establishing a therapeutic interaction.

Leaders are needed who will campaign for proactive adaptation to the waxing technology, refocusing medical education and training on decision making and the use of decision analysis based support systems, not those whose goal is to delay the demise of the waning one.

