

Influence of evidence-based guidance on health policy and clinical practice in England
  1. P Coleman, research associate,
  2. J Nicholl, director
  1. Medical Care Research Unit, Sheffield School for Health and Related Research, Regent Court, Sheffield S1 4DA, UK
  1. Ms P Coleman P.Coleman{at}


Objectives—To examine the influence of evidence-based guidance on health care decisions, a study of the use of seven different sources and types of evidence-based guidance was carried out among senior health professionals in England with responsibilities either for directing and purchasing health care in the health authorities, or for providing clinical care to patients in trust hospitals or in primary care.

Design—Postal survey.

Setting—Three health settings: 46 health authorities, 162 acute and/or community trust hospitals, and 96 primary care groups in England.

Sample—566 subjects (46 directors of public health, 49 directors of purchasing, 375 clinical directors/consultants in hospitals, and 96 lead general practitioners).

Main outcome measures—Knowledge of selected evidence-based guidance, previous use ever, beliefs in quality, usefulness, and perceived influence on practice.

Results—A usable response rate of 73% (407/560) was achieved; 82% (334/407) of respondents had consulted at least one source of evidence-based guidance ever in the past. Professionals in the health authorities were much more likely to be aware of the evidence-based guidance and had consulted more sources (mean number of different guidelines consulted 4.3) than either the hospital consultants (mean 1.9) or GPs in primary care (mean 1.8). There was little variation in the belief that the evidence-based guidance was of “good quality”, but respondents from the health authorities (87%) were significantly more likely than either hospital consultants (52%) or GPs (57%) to perceive that any of the specified evidence-based guidance had influenced a change of practice. Across all settings, the least used route to accessing evidence-based guidance was the Internet. For several sources, an association was observed between use ever and whether the health professional worked in the region where the guidance was produced or published. This was evident for some national sources as well as for those initiatives produced locally with predominantly local distribution networks.

Conclusions—The evidence-based guidance specified was significantly more likely to be seen to have contributed to the decisions of public health specialists and commissioners than those of consultants in hospitals or of GPs in a primary care setting. Appropriate information support and dissemination systems that increase awareness, access, and use of evidence-based guidance at the clinical interface should be developed.


The key challenge for the evidence-based movement in all countries pursuing quality improvements in the standards of patient care is to close the gap between what is known, on the one hand, and what happens in clinical practice on the other.1 Throughout the 1990s a large number of important primary and secondary resources, such as the international Cochrane Collaboration2,3 and national and regional Health Technology Assessment (HTA) research programmes,4 have been developed, along with many others, to provide the infrastructure necessary to get evidence into practice. Recent developments in England have included a high profile policy agenda committed to fostering a climate in the NHS5,6 wherein managers and clinicians examine their beliefs and practice critically against the best research evidence available.

    Key messages

  • Substantial variation in the knowledge, use, and perceived influence of published sources of evidence-based guidance exists between health professionals working in different health settings.

  • Senior health professionals are not proactive in seeking out evidence-based guidance on the Internet and most rely on it being disseminated to them by post.

  • Local factors other than dissemination policies may influence use.

  • Information systems that support the use of evidence-based guidance for public health policy and commissioning decisions should be developed at the clinical interface.

The extent to which any published source of guidance in isolation is likely to affect policy and clinical practice is limited,1 but this is not to say that guidance is not “valued” by the health professionals for whom it is provided, or that it does not contribute in some way to shaping attitudes or influencing behaviour. The effectiveness of any guidance depends on many factors: health professionals have to know that the guidance exists; the output has to be easy to access both in terms of availability and readability; and the content has to be relevant. There must be agreement with how the guidance is generated and how the evidence is interpreted. All these factors are necessary (but not sufficient) conditions to be met before the evidence may be acted upon, and the complex process may be disrupted at any point by other changes and circumstances. Little is known about the organisation of evidence-based guidance and the processes involved in its dissemination from the perspective of health professionals, yet each point where their knowledge or views of the guidance diverge from those producing it is a potential barrier to implementing the evidence.

In February 1999 we were commissioned to evaluate a regional system of rapid review of the evidence, appraisal, and recommendation by a peer committee based in the South & West (S&W) health region of England known as the S&W Development and Evaluation Committee (DEC).7 We adapted the design of one of the studies in the evaluation (box 1) to capture a much wider picture of the patterns of use of several different sources of evidence-based guidance, thus allowing the results for the S&W region to be included in a national survey and also to be dealt with separately for purposes of the evaluation. To identify measures that might improve the potential of the guidance to influence healthcare practice positively, we studied the patterns of knowledge and issues around the use of evidence-based guidance among senior health professionals.

“Topic driven” case studies of six DEC reports:

    Specified topics:

  • Cervical screening intervals

  • Insertion of grommets

  • Antenatal checks

  • Breast reconstruction following mastectomy

  • Dilatation and curettage

  • Triple therapy for Helicobacter pylori.

    Component studies:

  1. A postal survey of use generally in senior health professionals.

  2. A postal survey of use of specific topics (subsample).

  3. Follow up telephone interviews (subsample).

  4. “Before” and “after” study of routine data and follow up in three zones in England (South & West, NHS North & West region, and other England).

  5. Costs study.


A postal questionnaire that had previously been piloted was sent to a sample of 566 directors of public health and directors of commissioning/purchasing responsible for directing local health policy and commissioning services in the health authorities, consultants (who were also clinical directors) providing specialist care in trust hospitals, and lead GPs in primary care groups (PCGs) providing care to patients in primary care. All members of the sample held senior posts in the NHS and were selected on the assumption that their perceptions and spheres of influence might reasonably be expected to be key indicators of the wider impact of evidence-based guidance.


The sample was drawn from all eight health regions in England stratified into three zones as follows: (1) all hospitals and health authorities in the S&W region, (2) all hospitals and health authorities in the North & West (NW) region, and (3) all hospitals and health authorities in three health districts selected randomly in each of the other six English health regions, grouped together as “other England”. The sample of 12 lead GPs from each of the eight English health regions was selected randomly from the available PCG information.

The clinical specialties included (general surgery, plastic surgery, obstetrics and gynaecology, women and child health, paediatrics, ear nose & throat, gastroenterology, and oncology) in acute hospitals were those that might be influenced specifically by the six reports selected for the evaluation (box 1).

The final sample consisted of 95 directors of public health and directors of commissioning/purchasing responsible for policy and public health locally in 46 health authorities (a sample of 46% of all authorities in England); 375 clinical directors/consultants in 162 hospitals (representing 41% of all acute and/or community trusts but excluding ambulance trusts), and the lead GPs in 96 PCGs identified from a communication from the Department of Health detailing contact details and information available at the time of the survey (n=362), yielding a 27% sample of PCGs.

The questionnaire and up to two follow up reminders were sent with a letter addressed to each person in the sample by name and job title, identified in the case of the health authorities and the hospitals from a health services directory8 and for the GPs as described above.


After panel discussions with local information specialists and two health economists, seven sources of evidence-based guidance were selected to represent international, national, and local sources and to typify the different types of evidence-based guidance available in England at the time of the study (table 1).

Table 1

Summary of selected guidance


Against each source of evidence-based guidance the sample was asked about awareness and use ever, and how the information was accessed (appendix 1). The systematic reviews of the Cochrane Collaboration are available only electronically, but at the time of the survey all the other sources were available either electronically or in printed format. The sample was asked to indicate all the methods usually used to access that particular source of evidence-based guidance. The options were the Internet, special request through a library (reflecting proactive ways of accessing information), direct mail, circulated within organisation (typical of passive routes to information), and an “other” category.

To develop a proxy measure of the value of each source to our sample, we included three statements about “quality”, “usefulness” as a practical decision making tool, and perceived “influence on practice”. The three statements, increasing in intensity, were adapted purposively from a communication model9 and moved from belief towards action to capture the perceived impact of each source. Participants were asked to indicate their agreement with each statement on a 5 point Likert scale ranging from “agree strongly” to “disagree strongly”.

There was a “free text” section for additional comments and a box to indicate the participant's willingness to take part in a follow up interview.


Responses were processed in an Access database and analysed with SPSS for Windows using χ2 tests. Statistical significance was set at p<0.05 and 95% confidence intervals (95% CI) were calculated for key estimates.



Responses were received from 414 of the 566 in the original sample. Six forms were returned by the post office as undelivered. There were six refusals and one response was by letter rather than questionnaire. Adjusting for non-receipt and refusals, a response rate of 73% (407/560) was achieved. The response by health setting was 79% (n=75/95), 73% (n=270/370), and 65% (n=62/95) in the health authorities, hospitals, and primary care, respectively. No differences were observed between the proportions of responses received and the sample frame by health region or health setting (health authorities/hospitals/GP).
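The adjustment of the denominator and the confidence interval calculation described above can be sketched as follows. This is only an illustrative recalculation of the reported figures using a normal approximation for a proportion; it is not the authors' actual SPSS analysis.

```python
import math

# Figures reported in the paper
sent = 566          # questionnaires posted
undelivered = 6     # returned by the post office as undelivered
usable = 407        # usable questionnaires received

n = sent - undelivered          # adjusted denominator: 560
p = usable / n                  # response rate, approximately 0.73

# Normal approximation 95% confidence interval for a proportion
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"response rate {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

On the reported figures this reproduces the 73% (407/560) response rate, with a 95% CI of roughly 69% to 76%.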


Of the 407 respondents, 82% (n=334) had previously consulted at least one source of evidence-based guidance. In total, 1037 contacts with different evidence-based guidelines were reported, 973 with specified sources and a further 64 with “other” (predominantly Royal College guidelines). Seven of the 334 had consulted “other” sources exclusively. Differences in the proportions of respondents who reported no use of any source ever were observed between the three settings (health authorities 3 (4%), hospitals 55 (20%), GPs 15 (24%)). Variations in the patterns of use of evidence-based guidance were found between hospital consultants and GPs, and between directors of public health and directors of commissioning/purchasing in the health authorities. The source used most often by respondents from health authorities was the Effective Health Care Bulletins produced in York, while the Cochrane Collaboration was used most often by respondents in hospitals and Bandolier by GPs (fig 1A). Substantial differences between awareness and use were observed between respondents across the three health settings (fig 1B and C). The total number of respondents divided between “use ever” and “awareness” by the individual sources of evidence-based guidance (national only) are shown in table 2.
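The χ2 test named in the Methods section, applied to the reported counts of “no use of any source ever” by setting (3/75 health authorities, 55/270 hospitals, 15/62 GPs), can be sketched as below. This is a minimal stdlib illustration of the kind of comparison involved, not the authors' SPSS analysis.

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: settings; columns: [used at least one source ever, used none]
table = [
    [72, 3],    # health authorities (75 respondents)
    [215, 55],  # hospital consultants (270 respondents)
    [47, 15],   # GPs (62 respondents)
]

chi2 = chi_square(table)
df = (len(table) - 1) * (len(table[0]) - 1)  # df = 2
print(f"chi-square = {chi2:.2f} on {df} df")
```

On these counts the statistic is well above the df=2 critical value of 5.99 at p=0.05, consistent with the between-setting differences reported above.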

Table 2

Use and awareness of selected guidance by numbers of respondents

Figure 1

(A) Use ever of evidence-based guidance by health setting. (B) Awareness of evidence-based guidance but not used ever by health setting (national sources only). (C) Lack of awareness of evidence-based guidance by health setting (national sources only). HA = health authority; EHCB = Effective Health Care Bulletins; HTA = NHS Technology Assessment Programme Reports; Effect Matters = Effectiveness Matters.


Four respondent users of evidence-based guidance did not complete this question. Of those reporting past use of any evidence-based guidance, 84% (277/330) usually used one method only to access the information although the method varied between different guidance. The method used most frequently was “direct mailing”, which was reported by 57% of user respondents (190/334, 46% of all respondents) and accounted for 41% of all types of contact (457/1070); 29% (97/334) of users had accessed at least one source of evidence-based guidance by the Internet but, overall, the Internet represented only 12.6% of all types of contact and was similar to the proportion of specific requests for an item—for example, through a library (12.1%). No difference was observed in the use of the Internet to access any source of evidence-based guidance across the three health settings.


There was little difference between the three health settings in the proportion of user respondents who either “agreed strongly” or “agreed” with the statement “I think this is a source of good quality evidence-based guidance” (fig 2A). Proportional differences in those who “agreed” or “agreed strongly” with the statement “ . . . this source of evidence-based guidance is useful in the decisions I have to make” were observed for two of the four sources of evidence-based guidance with sufficient numbers of users from each health setting for comparisons to be made (fig 2B).

Figure 2

Proportions of users of evidence-based guidance who “agreed” or “agreed strongly” with the statements (A) “... this is a source of good quality evidence-based guidance”, (B) “this source of evidence-based guidance is useful in the decisions I have to make”, and (C) “... this source of evidence-based guidance contributed to changing my clinical/purchasing practice” by source of guidance and health setting. HA = health authority; EHCB = Effective Health Care Bulletins; HTA = NHS Technology Assessment Programme Reports; Effect Matters = Effectiveness Matters. *Less than five user respondents in hospitals and/or general practice (not shown).

Among users of any evidence-based guidance, a clinical/health policy split emerged in the proportion who “agreed” or “agreed strongly” with the statement that “ . . .this evidence-based guidance has contributed to changing my clinical/purchasing practice” (65 of 75 health authorities (87%), 140 of 270 hospitals (58%), and 35 of 62 GPs (57%)). Differences in the levels of agreement with this statement between the three health settings were observed for all the evidence-based guidance specified (fig 2C).


A positive association was seen in the proportions of professionals using a source of evidence-based guidance between the region in which the professional was based and that in which the guidance was published and/or produced (table 3). This reached statistical significance for the two regional DEC initiatives in the S&W and Trent regions whose reports were disseminated routinely and locally, and also for the NHS Technology Assessment Programme Reports, Effectiveness Matters, and Bandolier which had different distribution practices.

Table 3

Reported use (%) by professionals in health region where evidence-based guidance is produced/published (“local”) and use by all others


Additional comments were received from 22% of respondents. The qualitative analysis of the texts is available elsewhere.7 Clear differences emerged between professionals in the clinical and non-clinical settings. Directors of public health and directors of commissioning/purchasing were more positive about the value of specific sources and evidence-based guidance generally. The clinicians were more reserved, perceiving a lack of evidence in some clinical specialties (particularly ear nose and throat, palliative care, and mental health); bias in both the selection of the original papers included in some of the reviews and in how their results were interpreted; failure to address the clinically relevant questions; and issues of confidence in applying population-based results to individual patients in a clinical setting.



Our survey yielded a comprehensive picture of the knowledge, use, and perceived impact of several different sources of evidence-based guidance available in England among senior clinicians and policy makers based in three different health settings. The results indicate that published sources of evidence-based guidance are used, but there are clear differences in knowledge, use, and perceived influence of different sources of guidance between the professionals based in the three health settings. Those responsible for health policy and commissioning in the health authorities are much more likely to believe that evidence-based guidance has influenced their practice than doctors who provide clinical care in hospitals or primary care. Our study indicated that senior health professionals in any setting were not particularly proactive in seeking out information of this type on the Internet and relied on it being disseminated to them by post. A local effect was observed between the health region in which the professional is based and the region where the guidance is produced and/or published. Not unexpectedly, this was evident for the two regional DEC initiatives, but it was true also for some of the national sources (table 3).


The finding that positive beliefs about the quality of guidance do not necessarily translate into changing practice in a clinical setting has been reported previously in a Canadian study of hospital doctors10 and an Australian study of GPs.11 Our finding that 29% used the Internet to access evidence-based guidance confirms similar findings of a relatively low use of the Internet compared with other ways of accessing information reported in 1998 for GPs12 and hospital doctors.13 Our data extend the finding to include senior professionals responsible for public health and commissioning.

The sources of evidence-based guidance included in this study were typical of the different sources and types of publication available in England in 1999 (table 1). The UK Cochrane Centre is in Oxford but the collaboration is, of course, international. We have no reason to believe that the national and regional sources of evidence-based guidance in our selection were uncharacteristic of those developed in other countries to manage the evidence base. With the exception of Cochrane (available only electronically), all the sources in our selection were published both electronically and in printed format and were disseminated by post or on request. Again, we would expect that this is not very different from the way in which evidence-based guidance is organised in other countries. We therefore expect that our findings will be relevant to the international evidence-based movement.


We are unable to say whether the perceived impact of evidence-based guidance in non-respondents was different from that in respondents, but the overall response rate of 73% in a population of this type is high, and the rate of 65% achieved from the GPs compares well with a rate of 67% reported in a previous GP based survey undertaken in England.12 There is some evidence that non-response represents a diminishing relevance of the topic to the non-respondent compared with the respondent,14 and also that self-reported adherence to evidence-based recommendations is overestimated when compared with objective measures.15 As we cannot eliminate either of these sources of potential bias, the perceived impact of evidence-based guidance in professionals in each health setting may be inflated and our results should be interpreted as giving a “best possible scenario”.


Our study shows that awareness, use, and perceived impact of evidence-based guidance are much greater among those responsible for directing or purchasing health policy in the health authorities than among consultants in hospitals or GPs in a primary care setting. One explanation, which is also supported by the “additional comments” received in our survey, is that research in populations can help to inform purchasing decisions and policy but is often unhelpful in informing clinical decisions about individual patients.16 Our results also show that different groups of health professionals exhibit distinct preferences for different types of evidence-based guidance. This suggests a need for systems to produce, filter, target, and package the evidence in ways that reflect these preferences. Taking into account dissemination policies (table 1), the positive association found between use and the locality of publication and/or production of the national sources of evidence-based guidance indicates that local factors other than dissemination may influence use. The lack of awareness of important sources of evidence such as the NHS Technology Assessment Programme Reports, which was particularly marked among the GPs in our study (fig 1C), also raises issues about how best to bring the evidence to the notice of key providers of health care. The finding that electronic methods were used less commonly than the traditional routes to the published evidence may change over time, but we found no difference in Internet use to access evidence-based guidance by the health professionals across any of the settings. While acknowledging therefore that the complex nature and processes of clinical and non-clinical decision making are very different, our data indicate strongly that information systems such as those that exist to support the use of evidence-based guidance for public health policy and commissioning decisions should be developed at the clinical interface.


The authors would like to thank Andrew Booth, Alan Brennan, Chris McCabe and Simon Dixon for their help in selecting the sources of evidence-based guidance used in this survey, and Andrew Booth and Alicia O'Cathain for commenting on early drafts of the paper.

The survey was part of a larger evaluation of the reports published by the S&W DEC funded by the NHS Executive S&W. The views expressed in the paper are those of the authors alone and do not necessarily reflect the views of the NHS Executive S&W.




  • Conflict of interest: none.
