
Guidelines, judgement, opinion, and clinical experience
B Hurwitz, Head
Department of Primary Health Care and General Practice, Centre for Primary Care and Social Medicine, Faculty of Medicine, Charing Cross Campus, London W6 8RP, UK b.hurwitz{at}


Over the past decade it has become a commonplace—almost a definitional truism—to subscribe to the Institute of Medicine's view that clinical guidelines are (or should be) “systematically developed statements which assist practitioner and patient decisions about appropriate health care”.1 Guidelines now subserve other functions too: they provide up-to-date overviews of research evidence, its strengths, weaknesses, and scope of application; summarise research findings in a manner which allows derivation of performance indicators and review criteria; and are used to develop care pathways and reminder prompts, and to help set healthcare priorities. Indeed, one influential researcher notes that, although guidelines may once have been intended “to be aids to decision making by patients and practitioners . . . we do not use them in this way. Instead, they are used to modify the clinical behaviours of practitioners and reduce inappropriate variations in care”.2

    It is conventional wisdom that the development and application of guidelines are especially appropriate in situations where clinicians are uncertain what—if anything—is the most effective way of treating a particular clinical problem and where there exists reliable scientific evidence which, properly interpreted, can offer a sound basis for developing guidance. Reliable interpretation of scientific evidence is dependent on the adoption of formal methods to inform guideline development and encompasses:

    • an explicit approach to identifying areas of practice where guidelines could prove helpful;

    • convening competent guideline development groups;

• retrieval, assessment, and synthesis of all evidence relevant to the clinical area addressed;

    • translation of evidence into clinical recommendations;

    • external review of guideline recommendations.3–6

    Since guidelines offer explicit recommendations with the definite intent of influencing what clinicians do, their clinical recommendations make claims which range beyond those which can be derived logically from the results of meta-analyses or randomised trials. The clinical scope of level I evidence is generally too narrow to allow clinically useful guidelines to be created from these sources alone, so recommendations require moorings to other evidential findings and information, including expert and consensus opinion. Guideline formulation thereby steps beyond the results of particular studies and beyond re-presentations of published systematic evidence to incorporate processes of judicious extrapolation, interpretation, and value judgement.7

The paper by Rycroft-Malone8 in this issue of Quality in Health Care illustrates how guideline developers can bring rigorous techniques to bear in tackling such tasks. In the context of an evidence-linked guideline development process, she describes the formal means adopted by the Royal College of Nursing Institute's Quality Improvement Programme to develop a national guideline on assessment of risk and prevention of pressure ulcers. Ulcer risk assessment is a complex clinical area in which explicit evidence relating to a wide range of problems and techniques has been summarised.9–11 From these summaries, 200 statements were derived and rated on a “disagree/agree” scale of 1–9 by 10 members of a panel composed of participants who reflected the range of people to whom the guideline would apply. The panel was sent summaries of the research evidence and was asked to rate each recommendation statement, taking account of the evidence, their own expertise, and the opinions and realities of healthcare provision in the UK. The results of this exercise were fed back to panel members by the guideline developers, and the panel reconsidered each statement, with particular focus on those that had caused most disagreement. The threshold score for incorporation of each recommendation into the guideline was set at a median score of 7 or above, and an indication of the degree of agreement dispersion around the median score was included. A total of 160 recommendations were thereby adopted in the final guideline, which comprises a mixture of research based and consensus based recommendations. One wonders how many more would have been removed from the guideline had the median score been set at 7.5 or 8.5, or if a qualifying narrow interquartile range had been set to guarantee a minimum level of agreement.
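The arithmetic behind this rating exercise is straightforward to sketch. The fragment below is a minimal illustration, not the study's actual procedure or data: it assumes invented panel ratings and applies the decision rule described above (adopt a statement when the panel's median rating is 7 or above, reporting the interquartile range as an indication of dispersion of agreement).

```python
# Hypothetical sketch of the panel-rating aggregation described above.
# Each statement is rated 1-9 by 10 panellists; a statement is adopted
# when its median rating reaches a threshold (7 in the guideline), and
# the interquartile range (IQR) indicates how dispersed agreement was.
from statistics import median, quantiles

def appraise(ratings, threshold=7.0):
    """Return (median, IQR, adopted?) for one statement's panel ratings."""
    med = median(ratings)
    q1, _, q3 = quantiles(ratings, n=4, method="inclusive")
    return med, q3 - q1, med >= threshold

# Invented ratings for two statements (illustrative only, not from the study)
broad_agreement = [7, 8, 8, 9, 7, 8, 9, 7, 8, 8]
disputed        = [2, 9, 5, 8, 3, 9, 4, 7, 6, 8]

print(appraise(broad_agreement))  # high median, narrow IQR: adopted
print(appraise(disputed))         # median below 7, wide IQR: not adopted
```

Raising the threshold, or additionally requiring a narrow IQR, would filter out statements such as the second one above, which is the sensitivity the editorial wonders about.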

    The transparent approach of the Royal College of Nursing Institute to the development of a national guideline on assessment of risk and prevention of pressure ulcers goes some way towards reassuring those who for some time have warned of the dangers of treating guidelines as pronouncements which carry oracular authority. Ten years ago, for example, Tong wrote: “Medical practitioners should regard the recommendations of consensus development conferences as useful reference tools: not the rulings of philosopher kings, but the attempt of thoughtful people to share their knowledge—albeit imperfect—with other people”.12

    Formal techniques for appraising the results and relevance of scientific studies and of systematic reviews are now relatively well established in the context of guideline development.13 The report by Rycroft-Malone offers an approach which also brings rigour and stringency to the equally important task of assaying diverse sources of judgement, expert opinion, and clinical experience in their construction.



• See article on page 238
