Reactions to the use of evidence-based performance indicators in primary care: a qualitative study
  1. Emma K Wilkinson, research assistant ,
  2. Alastair McColl, lecturer in public health medicine ,
  3. Mark Exworthy, research fellow ,
  4. Paul Roderick, senior lecturer in public health medicine ,
  5. Helen Smith, senior lecturer in primary care ,
  6. Michael Moore, general practitioner ,
  7. John Gabbay, professor and director
  1. Wessex Institute for Health Research & Development, University of Southampton, Southampton General Hospital, Southampton SO16 6YD, UK
  2. LSE Health, London School of Economics and Political Science, London WC2A 2AE, UK
  3. Primary Medical Care, University of Southampton, Southampton SO16 5ST, UK
  4. Three Swans' Surgery, Salisbury ST1 1DX, UK
  1. Dr A McColl, Woolpit Health Centre, Heath Road, Woolpit, Bury St Edmunds, Suffolk IP30 9QU, UK a.mccoll{at}


Objectives—To investigate reactions to the use of evidence-based cardiovascular and stroke performance indicators within one primary care group.

Design—Qualitative analysis of semi-structured interviews.

Setting—Fifteen practices from a primary care group in southern England.

Participants—Fifty two primary health care professionals including 29 general practitioners, 11 practice managers, and 12 practice nurses.

Main outcome measures—Participants' perceptions of the indicators and the actions they took in response to them; the barriers to and facilitators of using the indicators to change practice.

Results—Barriers to the use of the indicators included the quality of the underlying data and the indicators' technical specifications, such as the definitions of diseases like heart failure and the thresholds for interventions such as blood pressure control. Nevertheless, the indicators were sufficiently credible to prompt most of those in primary care teams to reflect on some aspect of their performance. The most common response was to improve data quality through increased or more accurate recording. There was a lack of a coordinated team approach to decision making. Primary care teams placed little importance on the potential for performance indicators to identify and address inequalities in services between practices. The most common barrier to change was a lack of time and resources to act upon the indicators.

Conclusion—For the effective implementation of national performance indicators there are many barriers to overcome at individual, practice, and primary care group levels. Additional training and resources are required for improvements in data quality and collection, further education of all members of primary care teams, and measures to foster organisational development within practices. Unless these barriers are addressed, performance indicators could initially increase apparent variation between practices.

  • performance indicators
  • primary care
  • primary care groups
  • training


Evidence-based performance indicators are increasingly being used in primary care with the intention of improving quality of care.1 It is currently unknown whether these indicators will help bring about changes in clinical practice within the newly formed primary care groups in England (box 1). As considerable effort and resources will be put into assessing performance in primary care as part of clinical governance, it is important to determine reactions to the use of performance indicators. Such an assessment could also provide important lessons for the implementation of the National Service Framework indicators (box 2). There have been few attempts to evaluate reactions to performance indicators2 3 within the health service and especially within primary care. One study which assessed the usefulness of indicators within secondary care in the USA4 showed that potential users must perceive the indicators as relevant and of sufficient value before they will act upon them. Any positive impact on health and health care depended on whether appropriate action had been taken as a result of using the indicators.4

Box 1. Primary care groups

In 1999 the UK government established primary care groups with the aim of bringing general practitioners and community nurses in each area together to improve the health of local people.1 These replaced multifunds, locality commissioning groups, individual fundholders, and total purchasing projects. The main functions of these groups are to:

  • contribute to the health authority's health improvement programme on health and health care;

  • promote the health of the local population;

  • commission health services for their populations;

  • monitor performance;

  • develop primary care by joint working across practices;

  • better integrate primary and community health services.1

    Primary care groups are accountable to health authorities and “agree targets for improving health, health services and value for money”.1 There are several primary care groups in each district health authority. This new approach offers primary care the opportunity to address health and health care needs at both the individual and population levels.

Box 2. National Service Frameworks

The UK Government is developing a series of National Service Frameworks to improve the quality and consistency of services in a number of priority areas. The National Service Framework for Coronary Heart Disease, published in March 2000, sets 12 standards for the prevention, diagnosis, and treatment of heart disease. There are milestones to mark progress with each standard, together with long term goals.

We set out to investigate the reactions of primary health care professionals to a set of evidence-based cardiovascular and stroke performance indicators that we had developed previously (table 1).5 Our indicators were very similar to those now included in the National Service Framework for coronary heart disease (box 2). It is unknown how primary care teams will respond to performance indicators and this needs to be qualitatively explored.6 We could identify no other primary research on evaluating performance indicators using qualitative methods. We aimed to identify the range of perceptions towards the indicators and actions stimulated in response to them. A further objective was to identify the range of barriers and facilitating factors in using the indicators to change practice. Investigating the long term impact of the intervention did not fall within the scope of this study.

Table 1

Indicators used in study



Methods

We presented data on a set of performance indicators through audit, feedback, and educational material to each of the 18 practices within one primary care group. We had previously assessed the feasibility of deriving the indicators in all 18 practices within this primary care group and identified the problems, constraints, and costs of generating them.7 Considerable variation was found between practices in the use of computers and in the ability and ease of the various practice computer systems to generate indicators.7 As half the practices told us they would be unable to collate the data themselves, we collated the data on their behalf. It was possible to derive eight of the indicators in all practices, and all 26 indicators in three practices. Practices varied greatly in their identification of diseases and in their uptake of effective interventions.

At each practice one of the authors (AM) arranged a one hour presentation with the general practitioners, practice nurses, and the practice manager. During the presentation AM explained why we developed these evidence-based indicators, how we derived the indicator values for their practice, and how their indicator values compared with the other practices. We advocated a population approach to health by presenting estimates of the additional number of deaths or events that could be prevented in the primary care group with appropriate identification and full uptake of effective interventions.5 We also presented their variations in identification and uptake of interventions by deprivation scores. We encouraged the practice team to develop an action plan for change and gave them brief summaries of a stepwise, cyclical process of changing practice as proposed by Grol.8 A key part of this was to identify obstacles to change and to link interventions to overcome them.


Data collection

Fifteen of the 18 practices agreed to participate in this qualitative study. During the presentation one of the authors (EW) took observational notes on reactions to and comments made on the indicators. At the end of the presentation we asked each practice to identify for interview the lead general practitioner on previous audit initiatives. We randomly selected a second general practitioner and asked to interview the practice manager and one of the practice nurses who attended the presentation. EW, a non-medical researcher, conducted the interviews two months after the presentation. The interviews were semi-structured. The interview schedule was piloted in three of the 15 participating practices in the primary care group. The interviews, some of which were observed by members of the research team, took place between November 1998 and May 1999. EW asked all respondents for their reactions to the presentation and whether any changes had occurred as a result of it. If changes had occurred, respondents were asked to describe the types and process of change, including facilitating factors or barriers. If no changes had occurred, respondents were asked why this was so. All respondents were asked whether and how their practice could generate these indicator data. Doctors and nurses were asked for their views on the indicators and whether they had had any impact on their clinical practice. On average, interviews with clinicians lasted 50 minutes (range 40–85) and those with practice managers 30 minutes (range 20–40). The interviews aimed to identify issues unlikely to be revealed in a questionnaire and to tease out any hidden agendas.


Analysis

The interviews were audio taped and transcribed verbatim. The transcripts from the first three practices and the observational notes were read independently by EW, AM, and ME. We discussed the initial themes and range of responses in order to produce an initial framework for analysis.9 EW systematically applied the framework to all transcripts using NUD*IST software. AM checked the coding of every transcript. EW and AM met regularly to discuss emergent themes. The framework was refined according to new, emergent themes and the modified framework was then re-applied to all transcripts. This iterative process ensured the findings were firmly grounded in the data.10 Participants who attended the presentation were invited to a second presentation of the main findings so they could comment on whether the data analysis and interpretation were authentic representations of their views.11



Results

Fifty two primary health care professionals were interviewed across 15 practices (table 2). Three practice managers did not attend the presentation and two practices did not have a nurse present for the presentation. One practice nurse declined to be interviewed because of illness.

Table 2

Characteristics of respondents interviewed and practices within primary care group (PCG)


Interprofessional differences

Doctors were more outspoken in their views on the indicators than practice nurses and managers. Variation between the professional groups in their ability to interpret the indicator results may partially explain this finding. Only four doctors expressed difficulty in understanding the indicator results when asked for their views, compared with a third of the practice managers and practice nurses. A common view expressed by nurses was that some of the terminology used was difficult to interpret—for example, terms such as “denominator” and “confidence intervals”. Practice managers found some of the clinical concepts difficult to understand.

Views on the intervention

General practitioners' views on our way of reporting back their performance on the comparative, population based indicators fell mainly into three categories: scepticism, enthusiasm, and a focus on the novelty of the approach (box 3). Ten doctors were sceptical; their main concern was that data aggregated to a population level could mask individual patient preferences and histories of relevance to everyday decision making. Fewer doctors (n=7) were enthusiastic; they believed that our overall approach could provide data complementary to the individual patient focus of primary care. Four doctors perceived this approach as a radical new way of monitoring quality of care but were neither enthusiasts nor sceptics. The remaining doctors sought clarification of various aspects of the intervention, such as its future objectives. Neither practice nurses nor managers commented in detail on the population perspective; they tended to focus on their level of understanding and their expected involvement, and also sought clarification of various aspects of the study.

Box 3. Examples of views on our overall approach

Sceptical views: “I think a lot of what we do defies this sort of analysis. You can count the number of people on aspirin but you can't quantify the satisfaction, the lives helped and the patient behaviour modified” (general practitioner 14).

Enthusiastic views: “General practice is a multi-pronged job as you're looking after individuals as well as numbers and quantitative indicators. Half our job is to make sure the patient as a person is fine but it's important to see how you're doing on standard issues such as hypertension and ischaemic heart disease which can be categorised and quantified, so it's (the intervention) a useful part of the audit circle” (general practitioner 24).

Novel approach: “Personally, it is a quantum leap for me, just looking at the population overall and seeing the percentage of ischaemic heart disease patients, and whether they're on a certain treatment” (general practitioner 4).

The role of evidence

A common view amongst the general practitioners was that the indicators were evidence-based. No one in any professional group contested the evidence base of the indicators. One doctor said: “They (the indicators) are important as they are things we can do something about and they cover the major areas of secondary coronary heart disease prevention that can impact on morbidity and mortality” (general practitioner 19). However, a common criticism was the lack of precision of the indicators—for example, using only the last recorded blood pressure reading instead of a mean, or not excluding patients for whom drugs such as aspirin were contraindicated. One doctor said: “You need to improve specificity because we have to do things well and we don't want to get penalised because the indicators are too gross” (general practitioner 20). The problems of defining diseases and changing the recommended treatment thresholds for interventions were mentioned in 14 interviews. Most of these comments related to hypertension (n=8) and hypercholesterolaemia (n=6). Six general practitioners highlighted the difficulties of defining heart failure.

Views on addressing inequalities in services between practices

When we presented identification of diseases and uptake of interventions by practice deprivation score we suggested to each practice that the primary care group could focus its interventions on addressing some of these inequalities and target those practices in the more deprived areas that appeared to be coping less well. Despite this, only two respondents (both doctors) mentioned how performance indicators could be used to identify and address inequalities in services.

Indicators as performance management tools

Doctors reflected most on the use of performance indicators as management tools. Neither the nurses nor managers discussed the indicators' potential management function with the exception of two nurses who believed that doctors may feel threatened by indicators that might question their clinical practice. Individually, doctors held mixed views on the use of performance indicators in general practice, although on balance they expressed more concerns than positive views. Common concerns included an increase in workload, reductions in professional autonomy and trust, financial penalties based on performance areas beyond the scope of professional control, and short term expectations of improved quality in care. In comparison, perceived advantages included having the capacity to monitor important areas of care, improving efficiency, and facilitating up to date clinical practice. Examples of views on indicators as performance management tools are shown in box 4.

Box 4. Examples of views on indicators as performance management tools

Perceived advantages: “We need to have clinical governance and that needs indicators that are relevant to health, otherwise we are going to get pushed into doing irrelevant work. I do feel very enthusiastic about it because it will enable us to concentrate on important areas that need measuring” (general practitioner 9).

“Many doctors still continue to do what is right and not what's based on evidence, and I have fears about that. Some patients don't get a fair deal because of the lack of uniformity. Although there is much variation between practices in many ways, if you can bring this type of information to them, they're bound to think about how to achieve a consensus of opinion, and that's good practice” (general practitioner 27).

Concerns: “In general practice we are self-employed and have absolute power over our patch like the Bishop of the Church of England. It is difficult to tell a GP principal to pull their socks up as they are liable to get into a huff and get offended and you lose their support” (general practitioner 17).

Perceptions on performance and credibility of the data

Almost all the general practitioners (n=26) and nurses (n=9) and half the practice managers (n=5) questioned the validity of the data used to derive the indicators. The most common reasons for questioning their validity were computer related difficulties, particularly loss or corruption of data when transferring to a new system, gaps in the data due to inconsistent “blitzes” in recording, and wide variation in computer use within practices (box 5). Other reasons included the lack of clear responsibility for data entry work, poor computer skills, lack of computer training, and confusion in applying Read codes. Despite these concerns about data quality, the data were sufficiently credible to prompt most respondents across the professional groups to reflect on their performance as assessed by the indicators. All the professionals found the comparative nature of the results useful in interpreting their practice's performance. Almost half the respondents across all professional groups were surprised by their performance on at least one indicator. Of these, most believed their performance was worse than expected (15/25 respondents), which led either to concern or to further enquiry. There were no major differences across professional groups in this respect. For those who performed better than expected, their results were seen as encouraging.

Box 5. Examples of views on performance and the credibility of the data

Credibility of data: “At the end of the day it was interesting, but there was a big hole in terms of the amount of information that was easily available. We believe that a lot more of the information was there but that it's not easily gleanable, for example, blood pressure is being recorded but not Read coded” (general practitioner 4).

“I know that patients with ischaemic heart disease have been buying aspirin across the counter and we haven't bothered to Read code it” (general practitioner 20).

“Read codes are a pain in the neck! You can get people walking in space and flying airplanes, but you can't get the exact Read code for an overdose or something” (practice manager 9).

Expectations: “Only 73% (of those with hypertension) have had their blood pressure recorded in the past five years. You would expect it to be 100% wouldn't you? It's diabolical. If you have got someone with hypertension and you haven't checked their blood pressure for five years then that's no good. It's quite surprising that there isn't a bigger percentage because one imagines that one is doing a fantastic job, then when you actually see it in writing you think oh that is not quite as good as you think. I am sure that this sort of presentation really winds you up to do better” (general practitioner 25).

“One of the surprises was the fact that we seemed to be poor treaters of hypertension or were perhaps not aggressive enough treaters of hypertension. Perhaps we've been satisfied with a higher reading when the doctors should have been a bit more aggressive in their control of hypertension” (practice nurse 5).

Comparative data: “The most interesting things were the comparative figures between practices as far as I was concerned. It puts your practice in perspective to all the others, or gives you some vague idea of how you are performing” (general practitioner 28).

“It is helpful to be able to compare to local means and see whether you are doing a bit better or worse, and that is perhaps one of the strongest ways of getting GPs to alter things, because they do like to be seen to be doing things a bit better than their colleagues on the whole” (general practitioner 12).


Initiatives to improve data quality

The most common reaction to our intervention across all professional groups was a recognition of the need to increase the amount of data recorded and to improve the uniformity of recording, particularly in the use of Read codes. The main reason for this was the importance placed on being able to demonstrate to other practices within the primary care group that their own practice was providing good quality care: “Everyone has taken on board that we cannot say we're doing it, we have to demonstrate we're doing it—we have a little way to go” (practice nurse 8). A less common reason was that improving data quality may improve the quality of care given: “One patient had nothing recorded on the computer that she was hypertensive, although she had had a series of high blood pressure readings over the last five years. The hypertension code was added in and that triggered me to review her and I have referred her to the GP. So just having that code was a trigger” (practice nurse 5).

Almost half the practices (n=7) reported devising ways of improving their systems of recording information. Within three of these practices the lead doctors made changes to their computer system to improve the accuracy of data entry work—for example, changing blood pressure readings from the nearest 5 mm Hg to the nearest 2 mm Hg. These practices were well computerised. A further three practices attached Read-coded tags to the notes of patients with ischaemic heart disease and two of these practices instructed data clerks to input doctors' handwritten notes onto their computer (box 6). The practices that introduced paper based systems of recording were less well computerised. Within one well computerised practice the nurses started to record the medication of patients with ischaemic heart disease in a different colour in the patients' notes to ease data retrieval.

Box 6. Responses to the question “Did anything happen as a result of the presentation?” within one practice (number 6)

Practice manager's perspective: “The three of us—the practice nurse, the doctor and I—started making tickets to put on the patients' notes so that we can identify patients that fall into your categories. We are fighting to get as much on the computer as we can because we have a ‘clean' system and so it is down to us to make sure we identify patients' problems and put them on (the computer)”.

Practice nurse's perspective: “After the presentation we (doctor, manager and nurse) discussed the fact that we're not very good at recording all our information, and not recording it in a way that was retrievable. We've made moves to tighten that up. The manager and I had a second meeting to discuss Read codes and how we're going to get information on the computer. The manager had a discussion with the doctor to find out how he was going to record his information because he doesn't always put it on the computer. They decided when he came across CHD patients that he would mark the notes in a certain way and put them aside for someone else to put on the computer”.

Doctor's perspective (single handed GP): “We have tightened up our computer techniques a bit. When I say ‘we', I mean the practice manager and nurse. I can't remember what the nurse said after the presentation, she seemed to think that some of the information on computer was not retrievable, and she wanted a slightly different system for getting some of these pieces of information off the computer. I made some promises to be a better boy (about recording) which I haven't actually followed through yet. I hope, perhaps, to do it in the future.”

Q: “Why didn't you follow through?”

Doctor: “I think it was time constraints, the practice manager has been away for a while and I haven't had the time to do it.”

The lead doctors from five practices requested the data we used to generate their practice's results and, of these, three initiated an audit to validate the data. These practices were well computerised. One of these practices, which was single handed with a patient list of less than 3000, audited all the data and recalled patients with established ischaemic heart disease for a review of their medication. Initiatives to examine or improve data quality were conducted mainly by practices that were well computerised. These practices did not differ from practices that did not implement these changes in terms of the average age of doctors, qualifications, or number of general practitioner partners within each practice.

Changes to clinical practice

A common assertion amongst the clinically trained professionals was that they were already aware of the importance of secondary prevention of coronary heart disease. Even so, almost half the doctors and one third of the nurses believed that the intervention had reinforced the importance of these issues through increased awareness. One doctor said: “It was a powerful motivator to know the number of lives that could be saved by the intervention. You need this motivation … to bring it into your conscience that you may be letting your standards drop” (general practitioner 9). Nine doctors, of whom three were lead doctors, and one nurse stated that they had verified at least one aspect of their clinical practice to ensure that they were practising correctly. The most common initiative was to check whether their patients with ischaemic heart disease were on aspirin, had their smoking status recorded, their blood pressure checked, and their cholesterol measured.

A fragmented team approach to change

A common theme was the lack of a team approach to change. Four practice managers and three nurses chose not to attend our presentation. Of those who did attend, two managers and two nurses in four practices were unaware of subsequent meetings between the general practitioners in their practices at which they discussed indicator results. Differences between professional groups in terms of responsibility for data entry work may explain, partially, the lack of a team approach: “It's been difficult to get (the GPs) to use a disease register especially for diabetes—I don't know whether they feel threatened by me as a nurse” (practice nurse 4). “Doctors are no good at boring repetitive data entry work, but nurses are much better” (general practitioner 17). “I may want the doctors to use a Read code so that I can draw off information but they often decide to go their own sweet way that week” (practice manager 1). Poor communication may also partially explain this lack of a team approach: “We haven't been very good at communicating to the nurses what blood pressures we would want to know about or follow up” (general practitioner 19). “The doctors have obviously spoken about these indicators and haven't involved us (the nurses), I only looked at the aspirin because I knew you were going to interview me. It does concern me that nobody has taken the time to ask us what we are going to do” (practice nurse 1).

Eleven of the 15 practices devised some form of action plan for change. Plans to change were informal verbal agreements devised by one or two enthusiasts who were usually doctors. Plans were not elaborate, most focused on one area requiring change, and none identified potential obstacles to change. The necessity of devising a plan was questioned across all professional groups, mainly because areas requiring change were seen as intuitively evident. A further reason for this was a perceived lack of time to prepare plans. These reasons may also explain partially why commitment and enthusiasm for plans varied (box 6). Dissemination of plans was ad hoc and informal. In two practices neither the practice manager nor the nurses were aware of initiatives to change despite general practitioners' communication to the contrary. Eleven respondents, mainly doctors, across six practices described change within their practice as “individualistic” in nature. The six practices included the four with no plans to change.

Barriers and facilitators to change

We have already mentioned some barriers to change—for example, difficulties in understanding indicator related terminology or concepts. Box 7 contains a summary of the barriers and facilitators believed to influence change based on respondents' self-reports on how change occurred. The credibility of the indicator data, and the extent to which participants were willing to act upon them, were associated with the presence or absence of certain indicator attributes. The development of plans for change and their dissemination were associated closely with the availability of human and financial resources. For instance, the most common barrier to change across all professions was the perception of a lack of time to dedicate to indicator related work. Financial concerns were highlighted mainly by doctors. Eight of the 15 lead doctors and 14 of the randomly selected doctors were concerned about the cost of using indicators on drug budgets. Their comments focused mainly on the use of lipid lowering drugs.

Box 7. Barriers and facilitators believed to influence change

Attributes of the indicators

Extent to which indicators were seen as:

  • Evidence-based

  • Inclusive (i.e. extent to which perceived to cover important areas)

  • Reflecting current knowledge, for example, on threshold values

  • Clearly defined (e.g. disease types)

  • Representing an “open” rather than “hidden” agenda (associated with trust)

  • Based on reliable complete data (associated with organisational systems for data use and their perceived level of efficiency and effectiveness)

Factors at the practice level

Development of plans for addressing indicator related issues were associated with:

  • Understanding of indicator related terminology and associated concepts within team

  • Importance attached to indicator data

  • Agreement amongst team on the purpose, benefits and importance of indicators

  • Whether indicator results highlight new issues or areas for concern

  • Interprofessional communication of indicator results

  • Resources (see below)

Dissemination of plans was associated with:

  • Existence of a “product” champion to enthuse and educate

  • Enthusiasm/interest for the indicator data and the related topic area

  • Resources (see below)


Resources

  • Amount of time available for interpreting and acting on indicator data

  • Practical support and clarity of role allocation for data entry/audit work

  • Capital available, for example, if improved uptake linked to extra costs or IT training

  • Current state of information technology in practice and available resources for upgrading

Factors external to practice

  • Access to expert advice from, for example, pharmacist, public health doctor, secondary care

  • Competition between practices

  • Extent to which indicators relate to or “fit in with” local initiatives, national policies (e.g. clinical governance, NICE), primary care group initiatives and policies, and the published literature

A strong facilitator for change was the extent to which the indicator represented a personal interest or an allocated responsibility. At practice level, practical support, including expertise in information technology, and the efficiency of systems and structures for data use were important factors in knowing how to deal with indicator results. Beyond the practice, the extent to which the indicators accorded with other ongoing local and national initiatives was important in increasing the status or relevance of the indicator results.


Discussion
Our main findings were that barriers to the use of the indicators were their data quality and their technical specifications. Nevertheless, the indicators were sufficiently credible to prompt most of those in primary care teams to reflect on some aspect of their performance. The most common response was to improve data quality through increased or improved accuracy of recording. There was a lack of a coordinated team approach to decision making. Primary care teams placed little importance on the potential for performance indicators to identify and address inequalities in services between practices. The most common barrier to change was a lack of time and resources to act upon indicators.


Our way of presenting back data on a set of performance indicators through audit, feedback, and educational materials represents only one possible approach. Whether the purpose of the indicators is to set minimum standards, reward good performance, or punish poor performance may influence the responses of potential users.2 The absence of specific incentives to change, either positive or punitive, meant that responses were purely voluntary. This may not be the case with future performance indicators. The nature of feedback on performance may also influence outcome; for example, others have reported that the identity of the person giving feedback during practice visits influenced responses.12 13 Despite these potential limitations, previous research suggests that a multifaceted intervention such as ours is more likely to be effective than a single intervention.14 15 We did not check the respondents' accounts of how they reacted to the indicators against what actually happened within each practice.


Differences between professional groups in responding to indicators

Differences in the meanings, relevance, and importance attached to performance indicators across professional groups may influence the impact of future performance indicators. In our study, practice managers and practice nurses speculated less than the general practitioners on the implications of using performance indicators, which may indicate a lack of interest in, or perceived relevance of, performance indicators to their everyday work. Their relative indifference, coupled with the potential for misunderstanding, may present major obstacles to primary care teams working effectively in response to performance indicators. Furthermore, most communication about the indicators took place within, rather than across, professional groups, and was generally unplanned and based on informal verbal agreement. There is little evidence on the effectiveness of such informal communication. However, a recent NHS review15 provides evidence that more structured communication mechanisms, including written plans for change, enable knowledge sharing and aid the process of monitoring, evaluating, maintaining, and reinforcing change. The paucity of team based plans was linked mainly to a lack of time, but may also reflect the individualistic nature of some general practitioners or indicate poor access to people with appropriate knowledge and skills.

Variation between practices

The variation in data quality on computer systems across practices7 resulted in varying levels of feedback. Variation in data quality has been reported in studies that audited similar topics16 17 and is likely to exist between practices in the 481 English primary care groups. Whilst such variation in data quality exists, what constitutes an appropriate response to performance indicators will also vary widely. Our study suggests that better computerised practices are better placed to improve their data systems, or may simply be more motivated to do so.

Limitations of evidence-based performance indicators

Although the evidence base of the indicators was not contested, other attributes were criticised. Research suggests that the more an organisation uses performance indicators to examine performance, the more reasons health care providers find to discredit the validity and reliability of potentially threatening information.18 Others propose that health professionals aim to retain substantial autonomy over their work and resist external interventions.19 In our study, criticisms regarding data quality may signal a deeper distrust of performance indicators, such as a fear of declining clinical autonomy. However, our data were of sufficient credibility or interest to prompt a review of data systems in several practices.

Variation and inequalities in practice

Other studies have shown that improvements in care must be linked with incentives and strategies for change.20 An assumption in the Government's approach to performance management in primary care is that variations in care are unacceptable21 and that increasing accountability through indicators and clinical governance will be an incentive for change. Our findings suggested that primary care teams placed little importance on, and showed little interest in, using performance indicators to identify and address inequalities between practices. This indifference may reflect a tendency within primary care to limit perceived responsibility to patients on a practice's own list rather than considering the contribution to the health of the wider population. In another study only a minority of primary care workers agreed that it was desirable to try to reduce variations in health care.22 Strategies to improve the quality of primary care will be less likely to succeed if primary care teams are not persuaded of the importance of reducing unacceptable variation.23 This may be an important issue, as reactions to performance indicators could initially increase variation between practices and primary care groups.


Primary care teams, practices, and individuals will be at different stages of development in their skills, but we suggest that the National Health Service Executive and the Department of Health need to address the following issues, all of which have training and resource implications and require further research and development. They need to encourage:

  • further standardisation of data recording and retrieval in primary care, building on the many requests and attempts to do so;24–26

  • improvements in understanding of all those using performance indicators (especially chief executives, board members and clinical governance leads in primary care groups);

  • primary care groups to produce locally owned strategies to reduce inequalities in access to effective health care and variation in practice;

  • practices to work as teams when implementing change.


Primary care performance indicators could initially increase apparent variation between practices by encouraging well organised practices to improve their health care further, whilst those with little computerised data continue to find it difficult even to enter data, let alone respond to it.

For the effective implementation of national service frameworks in primary care there are many barriers to overcome at individual, practice, and primary care group levels. Addressing these barriers might help the Government to meet its aims to performance manage primary care, to reduce inequalities in access to effective health care, and to reduce unacceptable variation between practices.



Funding: This study was funded by the Department of Health.

Conflict of interest: none.