Description without prescription is like diagnosis without treatment
Why bother trying to discover how clinicians make decisions? Would it really make any difference to the quality of care if we knew more about their decision making processes? Is there any basis for the conventional assumption that it would make a significant difference, and in the right direction?
Probably not. During the three decades since the pioneering work of Elstein et al,1 numerous studies of the decision making behaviour of clinicians—and, indeed, professionals in many fields—have yielded only one relevant finding. Insofar as we can make inferences about how they decide what to do by observing their behaviour or interrogating them, their decisions and decision processes vary enormously. Even when researchers are able to come up with generalisations about the diagnostic or therapeutic processes of practitioners, these are often weakly supported and/or highly restricted in their coverage. Above all, they are analytically vague. This is not surprising because, even though some explicitly analytical reasoning is usually reported by practitioners, the expertise applied in professional decision making appears to be substantially intuitive, involving significant amounts of either intuitive pattern recognition or intuitive regression across “multiple fallible indicators”.2 The disappointing results from the vast amounts of money and effort put into developing “expert systems” of the production rule (“if-then”) sort have merely confirmed that much of the time experts literally do not know what they are doing. This does not, of course, imply that what they are doing is not appropriate and may indeed be optimal. What it does mean is that even skilled “knowledge engineers” cannot extract the inaccessible elements of expertise for use in either practice guidelines or professional training.
Given the undoubted existence and significance of intuitive expertise, what is the point of attempting to describe the decisional behaviour of doctors? Setting aside the aim of acquiring knowledge for its own sake, which justifies the interest of the academic psychologist, does descriptive theorising and empirical research without an explicit prescriptive standard have any practical use for either practitioner decision making or professional education? Why spend any time on descriptive theorising unless one knows what is the best decision or best decision process, or both? Without a prescriptive basis, descriptive results are of no use in improving the quality of care, and this is true whether the adopted prescriptive basis is decision analysis, the practice of some person or group defined as best decision practice, or any other criterion.
It is, of course, methodologically imperative that the prescriptive basis be defined before any research study. Otherwise one will simply be defining the prescriptive as what happens: this is the way doctors do make decisions, therefore this is the way they should make decisions. Alternatively, one will end up simply pointing out the existence of variation, in itself of no practical use except insofar as it acts as a stimulus to identifying the necessary prescriptive basis.3
If one does have an accepted prescriptive basis for quality care, why not just apply it and teach it to the extent either is possible? Forget the descriptive challenge except as an aid in determining the most effective way to identify the obstacles to implementing the prescriptive.
But there is a major difficulty lurking here—one that only an explicitly analytical prescriptive standard, such as that offered by decision analysis, satisfactorily exposes. Many studies of practitioner decision making which seek to evaluate the quality of decisions (either explicitly or implicitly) fail to recognise, or sufficiently emphasise, two things. Firstly, there can be no such thing as a gold standard verdict on management decisions of the sort that is possible for diagnostic judgements. Decisions involve value judgements as well as probability judgements, and the prescriptive bases of the two types are very different, if indeed one exists for value judgements. Secondly, any evaluation of a decision by a prescriptive standard must logically be made on an ex ante basis. One cannot sensibly evaluate a decision by its ex post outcome, as is often suggested.
One can certainly set up a prescriptive standard against which to evaluate an ex ante probability judgement offered as to whether this patient has appendicitis or this child has been abused. But unless one can also set up a gold standard on the value side of the decision, which will involve establishing the relative value/disutility to be assigned (ex ante) to the false positive and false negative errors always possible under irreducible uncertainty, one cannot evaluate the decision. To evaluate the decision one must be able to identify what the best one was in this particular case, and this necessitates identifying the best available probabilities and the most appropriate value judgements—in both cases at the moment of decision. Evaluation of decisions is therefore contingent on agreement on the values and preferences regarded as the appropriate ones at that moment. Ethically, these should be those of the owner(s) of the decision—the patient in the individual clinical situation, or several constituencies in public health and the health services. If there is insufficient agreement on these—and some variation in values may be consistent with the same choice of action—no agreed evaluation of the quality of a decision will be possible.
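The dependence of the "best" action on the value side can be made concrete with the classic decision-analytic treatment threshold: the disease probability above which acting beats waiting is determined entirely by the (dis)utilities attached to the four possible outcomes. The figures below are purely hypothetical illustrations, not values drawn from the text:

```python
# Classic decision-analytic "treatment threshold": the probability of
# disease at which the expected utilities of acting and not acting are
# equal. All utilities here are hypothetical assumptions.

def treatment_threshold(u_tp, u_fp, u_fn, u_tn):
    """Solve p*u_tp + (1-p)*u_fp == p*u_fn + (1-p)*u_tn for p."""
    # act:  p*u_tp + (1-p)*u_fp   (true positive vs false positive)
    # wait: p*u_fn + (1-p)*u_tn   (false negative vs true negative)
    return (u_tn - u_fp) / ((u_tn - u_fp) + (u_tp - u_fn))

# One set of value judgements, weighing a missed case very severely...
t1 = treatment_threshold(u_tp=0.90, u_fp=0.70, u_fn=0.10, u_tn=1.00)
# ...and another that regards a missed case as much less serious.
t2 = treatment_threshold(u_tp=0.90, u_fp=0.70, u_fn=0.60, u_tn=1.00)
print(round(t1, 3), round(t2, 3))
# → 0.273 0.5
```

At an agreed ex ante probability of, say, 0.4, the first set of values prescribes acting and the second prescribes waiting, so without agreement on the values no agreed evaluation of either choice is possible.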
Why the ex ante basis? Under uncertainty it is possible that the best decision will produce the worst outcome and vice versa. One can obviously establish, by an ex post gold standard procedure, whether this patient actually had appendicitis or whether this child had actually been abused. (The latter example illustrates the difficulty of establishing 24 carat gold standard verdicts or, in many cases, ones of very few carats.) But while the judgement/ex post outcome observation in this case can be added to the database for future decisions—improving the assessments of the sensitivity and specificity of the professional concerned—it cannot, by definition, change the evidence that was available at the time the original decision was made. It is therefore irrelevant to the evaluation of that decision. (The existence of a treatment effect, as in the ventilation case investigated by Kostopoulou and Wildman,3 is a serious problem for the development of the evidential database.) Equally irrelevant is the experienced utility or disutility of the actual outcome, as opposed to the anticipated utility or disutility of the possible outcomes at the moment of the decision.
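A toy calculation, again with entirely hypothetical numbers, illustrates why the evaluation must be ex ante: the option that maximises expected utility on the evidence and values available at the moment of decision can still be followed by the worse realised outcome.

```python
# Toy ex ante evaluation (all numbers are hypothetical assumptions).
# At the moment of decision the best available probability of
# appendicitis is 0.30, and the agreed utilities for the four
# possible outcomes are:
utilities = {
    ("operate", "appendicitis"): 0.90,     # condition found and treated
    ("operate", "no appendicitis"): 0.70,  # unnecessary surgery (false positive)
    ("wait", "appendicitis"): 0.10,        # missed case (false negative)
    ("wait", "no appendicitis"): 1.00,     # no condition, no intervention
}
p = 0.30  # ex ante probability of appendicitis

def expected_utility(action):
    return (p * utilities[(action, "appendicitis")]
            + (1 - p) * utilities[(action, "no appendicitis")])

eu = {action: expected_utility(action) for action in ("operate", "wait")}
best = max(eu, key=eu.get)
print(best, round(eu["operate"], 2), round(eu["wait"], 2))
# → operate 0.76 0.73
# Ex post the patient may turn out not to have appendicitis, in which
# case "wait" would have realised the higher utility (1.00) - but that
# outcome cannot change the fact that "operate" was the best ex ante
# choice on the evidence available at the time.
```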
Description without prescription is as useful as diagnosis without treatment.