
Learning In Practice

Revalidation for general practitioners: randomised comparison of two revalidation models

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7441.687 (Published 18 March 2004) Cite this as: BMJ 2004;328:687
  1. David Bruce, director of postgraduate general practice education (d.bruce@tcgp.dundee.ac.uk)1,
  2. Katie Phillips, project officer1,
  3. Ross Reid, associate adviser1,
  4. David Snadden, professor2,
  5. Ronald Harden, professor of medical education3
  1. NHS Education for Scotland, Tayside Centre for General Practice, Dundee DD2 4AD
  2. Northern Medical Program, Universities of Northern British Columbia and British Columbia, Prince George, BC, Canada V2N 4Z9
  3. Centre for Medical Education, University of Dundee, Dundee
  Correspondence to: D Bruce
  • Accepted 16 January 2004

Abstract

Objective To compare two models of revalidation for general practitioners.

Design Randomised comparison of two revalidation models.

Setting Primary care in Tayside, Scotland.

Participants 66 Tayside general practitioners (principals and non-principals), 53 of whom completed the revalidation folders.

Interventions Two revalidation models: a minimum criterion based model with revalidation as the primary purpose, and an educational outcome model with emphasis on combining revalidation with continuing professional development.

Main outcome measures Feasibility and acceptability of each approach and effect on the doctor's continuing professional development; the ability to make a summative judgment on completed models; and whether either model would allow patient groups to have confidence in the revalidation process.

Results The criterion model was preferred by general practitioners. For both models doctors reported making changes to their practice and felt a positive effect on their continuing professional development. Summative assessment of the folders showed reasonable inter-rater reliability.

Conclusions The criterion model provides a practical and acceptable model for general practitioners to use when preparing for revalidation.

Introduction

In the United Kingdom, a doctor's licence to practise is secured by registration with the General Medical Council. Periodic revalidation, to start in spring 2005, will be the regular demonstration by doctors that they remain fit to practise, and the process by which a doctor's licence is maintained.1 The UK approach will be to link revalidation with continuing professional development through annual appraisal or an independent route, with both requiring doctors to provide information showing that their workplace activities meet the standards for “fitness to practise.” Doctors whose submissions are either absent or below fitness to practise standards will undergo GMC performance assessment. Their registration will then depend on their satisfying the GMC fitness to practise procedures.2

Background

Although the GMC's role is to set the professional standards against which general practitioners will be revalidated, it cannot prescribe either what information needs to be collected or how the process should work. General practitioners therefore lack a clear guide or model with which to show that their practice is satisfactory for revalidation purposes. To gain insight into this problem, we recruited general practitioners in Tayside, Scotland, to develop, implement, and evaluate two models for revalidation. The models were a minimum criterion based model, with revalidation as the primary purpose, and an educational outcome model, with emphasis on combining revalidation with continuing professional development.

We wanted to find out the effect of each approach on the doctors' continuing professional development, whether either of these models could be used as a basis for a summative judgment, the feasibility and acceptability of the process to general practitioners, and whether either would allow patient groups to have confidence in the revalidation process.

The study ran between November 2000 and January 2003. When this study started, the aims of revalidation were to ensure public confidence in doctors,3 promote maintenance of competence and continuing professional development, and detect poor performance.4 The objective of detecting poor performance caused some concern to study participants, who had difficulty reconciling it with the formative, developmental objective of demonstrating fitness to practise by means of a portfolio. Issues around the introduction of revalidation to the profession are outlined in our companion paper on bmj.com, which details the development and implementation of the two revalidation models.

Participants and methods

This study involved three phases. The development phase (from November 2000 to August 2001) involved recruitment of participants, development of revalidation models, and desktop publishing of the models. The implementation phase (September 2001 to June 2002) covered the implementation of the models. The evaluation phase (July 2002 to January 2003) comprised evaluation of the process and assessment of the completed models.

Participants

All 340 general practitioners (principals and non-principals) registered on the databases of Tayside Primary Health Care Trust, the local faculty of the Royal College of General Practitioners, and the GP Postgraduate Unit were invited by letter to take part in the two year study. We set up evening meetings to explain the study and recruit participants. The two models were developed by two groups of volunteer general practitioners and “key stakeholders” (representatives from Tayside Health Council (patient representatives), the Royal College of General Practitioners, each of Tayside's three local health care cooperatives, the local non-principals group, and secondary care).

The revalidation models

Criterion model—For this model we used a method suggested by the Royal College of General Practitioners and based on Good Medical Practice for General Practitioners.5 We grouped the unacceptable attributes of a general practitioner under the seven headings of good medical practice and created a positive criterion statement for each group of unacceptable attributes. We then decided what information general practitioners could collect in their portfolios, along with the pass point (standard) where this could be specified (see example in box 1).

Educational outcome model—On the basis of the Dundee outcome model,6 we modified the 12 outcomes determined for medical practice to reflect the specific tasks and competencies of general practice. Conceptually, this model looks at the tasks that a doctor does (technical intelligences), the deeper understanding needed for those tasks (intellectual intelligences), and the professionalism of the doctor (personal intelligences). Within each outcome, we specified broad statements of required practices as “givens.” Information to be collected by general practitioners for their folders was specified and standards determined (see box 2).

For both models, standardised forms for structured presentation of the information were agreed and made available on the postgraduate department website.7 Full details of the development and content of the models are given in our accompanying paper on bmj.com.

Data collection

The study generated two types of data: assessment data from completed revalidation folders and evaluation data from participants.

Assessment data

Completed folders were anonymised. We then added two quality control folders that contained inadequate evidence of fitness to practise in order to test the robustness of the assessment process. Each folder was assessed separately by two general practitioners in the same implementation group (peer assessment). They assessed the folders as pass, problematic but pass, or fail. We included the category “problematic but pass” to aid the decision making process, so that assessors could record that information was thin but met the minimum standards to pass revalidation. A second assessment was made by an “external assessment group” comprising a patient's representative, a senior doctor nominated by the study group, and a doctor working in medical education. This group assessed a sample of the completed folders, any problem folders, and two quality control folders (12 in total). They assessed the folders as pass or fail. In addition, the GMC technical group assessed 10 folders, comprising a quality control folder, a problem folder, and a random sample of the other folders.

Box 1: Example of a criterion statement, information to be collected, and standard created for criterion revalidation model

Unacceptable attributes (GMC heading of good clinical care)

Doctor has no way of organising care for long term problems or for prevention

Criterion statement

Doctor provides continuing care for chronic medical problems

Information to be collected

Doctor provides a management plan illustrating the care of a patient with chronic disease

Standard

Protocol completed and referenced to local or national standards

Box 2: Example of givens, information to be collected, and standard created for educational outcome revalidation model

Givens for outcome 1—clinical care (history taking)

Doctor is able to elicit adequate clinical details to formulate a diagnosis

Ensures no serious condition is missed

Considers social and psychological factors

Makes effective use of time

Information to be collected

Observation of five consultations with colleague or

Case report from a consultation or

Patient satisfaction survey

Standard

Forms for choice of information completed

Evaluation data

We collected data from participants at each stage of the study using feedback forms, piloted structured questionnaires (with open questions and closed responses on a 5 point Likert scale), and semi-structured interviews. The resulting mixture of quantitative and qualitative data allowed us to explore doctors' perceptions of the two models and the difficulties they encountered, to compare the models, and to estimate the time taken to complete them. We analysed the quantitative data using SPSS and analysed the qualitative data for content to develop major themes.8
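To make the agreement statistic reported in the results concrete, the sketch below computes Cohen's κ for two raters in Python. This is illustrative only: the study's analysis was done in SPSS, and the paired ratings shown are invented for demonstration; the scikit-learn function used is a standard, independent implementation of the statistic.

# A minimal sketch, not the authors' analysis (which used SPSS). The paired
# ratings below are invented to show how Cohen's kappa quantifies agreement
# between two assessors beyond what chance alone would produce.
from sklearn.metrics import cohen_kappa_score

# Hypothetical judgments by two peer assessors on the same six folders,
# using the study's three categories.
rater_a = ["pass", "pass", "problematic but pass", "fail", "pass", "pass"]
rater_b = ["pass", "problematic but pass", "problematic but pass", "fail",
           "pass", "pass"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1 = perfect agreement, 0 = chance level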

Results

A total of 72 general practitioners indicated initial interest, and 45 volunteered to develop the two models. A further 24 indicated interest in piloting the completed models. Sixty six doctors started the implementation phase: two dropped out at the start for no known reason, and three dropped out because of serious personal illness, giving a total working sample of 61. Eight failed to hand in folders and did not provide feedback as to their reasons for dropping out. Of the 13 doctors who dropped out, 10 had been involved with the study since the start (that is, had developed the models), and three had joined the study at the implementation phase. The remaining 53 handed in completed folders for peer and external assessment. Table 1 shows the demographics of the study groups.

Table 1

Demographics of general practitioners who participated in implementation of two revalidation models. Values are numbers of doctors


The time taken to complete the folders varied widely, from less than 20 hours to more than 40, with the educational outcome model requiring slightly more time (table 2).

Table 2

Time taken for general practitioners to complete revalidation models. Values are numbers (percentages) of participants


Assessment of completed folders

Peer assessment of the 55 folders showed a good degree of inter-rater reliability (κ = 0.66). The assessment made by the external group on 12 folders showed moderate agreement with the peer assessment (κ = 0.43). Of the 12 folders that were marked three times (once by each of two peer assessors and once by the external group), two were marked as doubtful in two of the three assessments. The quality control folders were identified as substandard in five of their six assessments (each folder received two peer assessments and one external group assessment). The GMC technical group confirmed that, of the 10 folders they sampled, nine provided sufficient information to support revalidation and one (a quality control folder) contained inadequate information.
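For readers unfamiliar with the statistic, Cohen's κ corrects raw agreement for chance. The formula below is the standard definition, not anything specific to this study:

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of agreement between two assessors and \(p_e\) is the proportion of agreement expected by chance. On the widely cited Landis and Koch scale, values of 0.41-0.60 are conventionally read as moderate agreement and 0.61-0.80 as substantial agreement, which is the sense in which κ = 0.66 and κ = 0.43 are characterised above.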

Evaluation data

Doctors who had helped to develop the models valued the protected time to work with their colleagues and patient groups. Those who developed the simpler criterion model found the tasks and processes clear but had difficulty in trying to define and measure good practice. Doctors who developed the educational outcome model enjoyed the learning and educational theory but felt that more guidance, leadership, and facilitation were needed. Doctors reported changes in practice, including updating of medical bags, improved record keeping, and increased audit activity.

The study design allowed collection of data from three groups of doctors implementing the models (table 3). All who developed a model found it easy to follow when implementing it. However, those who, after having developed the criterion model, switched to implementation of the more complex educational outcome model found it difficult to grasp, and the new recruits found both models complex to understand.

Table 3

Subgroups of general practitioners who implemented the two revalidation models. Values are numbers of participants allocated to study groups (from the initial 66 doctors) and those who completed the study (the 53 doctors who handed in folders)


Despite the doctors' reservations and early confusion, feedback at progress meetings indicated that, once they began collecting information for their folders, most found the task straightforward and relatively easy. The doctors had a range of information available for inclusion in their folders (table 4). Though all collected a wide variety of data to illustrate their practice, they had clear views and preferences about it. Observation of practice, clinical audit, and analysis of prescribing data were considered the least feasible information to collect: the need to involve partners and to obtain data for prescribing and audit activities required both planning and effort. Observation of practice, patient satisfaction surveys, and peer surveys were the least acceptable data to collect: each of these involves external opinions of a doctor's practice. When asked which information provided the most robust evidence of their performance, participants rated observation of practice, medical records, and referral letters highest.

Table 4

Information collected in general practitioners' revalidation folders. Values are numbers (percentages) of participants who included each type of information


Non-principal general practitioners had problems getting staff to cooperate with information gathering and felt marginalised by a practice based emphasis in the data. Lack of access to prescribing data and difficulty in audit were also key issues for non-principals.

Participants felt that the educational outcome model involved more work and effort than should be required for revalidation purposes, but also found it enjoyable. Almost half of the doctors involved with either developing or implementing this model indicated that they would be interested in developing it further, to diploma or masters degree level.

At completion of the folders, though no difference was reported in ease of understanding the models, we found differences in ease of implementation and in acceptability, with the criterion model favoured (table 5).

Table 5

Ease of implementing revalidation models and their acceptability. Values are numbers of participants


Doctors reported that both models had a positive effect in encouraging their continuing professional development. Although we have no hard evidence of changes in their practice, most doctors completed their folders using a wide variety of data that included patient and peer surveys and observation of their practice by colleagues (table 4). Such educational activities are rarely presented for postgraduate accreditation in Tayside. The choices of information collected in the doctors' folders were similar for both models.

Discussion

In this study we developed two models of revalidation for general practitioners. These were acceptable to doctors and achieved the aims of encouraging continuing professional development, detecting poor performance, and assuring patients that doctors who successfully complete the process are competent practitioners. The models provided a structure that allowed general practitioners to demonstrate their fitness to practise by selecting from a menu of information choices. The preferred criterion model was found to be feasible and acceptable to general practitioners.

Potential limitations of study

The study group was recruited from those who volunteered in response to a single invitation to all general practitioners in Tayside. However, the group was generally representative: it included 20% of the general practitioners in the region, from differing backgrounds, including many not traditionally seen at postgraduate educational or research meetings.

The timescale of this project was tight, with doctors completing their folders over nine months. This contrasts with the five year revalidation cycle. As a result of the time pressure, any comparison between a complex and a simple model is likely to favour the simple one.

Lessons learnt

The high completion rate may be explained by three factors. Both models were developed by general practitioners (plus key stakeholders), enhancing ownership.11 Considerable support was given during implementation, with meetings, examples of evidence, standard forms available on the internet, and independent organisation of patient and peer surveys. The models were based on Good Medical Practice for General Practitioners,12 with attributes grouped under the seven headings of good medical practice, making the rationale for providing evidence clear to participants.

Summative assessment of portfolio work, though used in undergraduate education,13 has been problematic in postgraduate general practice.14 In our study, assessment of both models showed good inter-rater reliability. This suggests that if doctors collect information about their practice in the standardised format used in these models and are assessed against clear criteria and standards, then decisions on whether to recommend revalidation can be made with some confidence. Although the models allowed participants a choice of what information to include in their folders, each choice was clearly defined as to what should be included and what standard was acceptable.15

We attempted to be inclusive of non-principals, a growing group of general practitioners who are often neglected in educational matters.16 However, the problems they encountered show that more flexibility in folder content is needed.

Both revalidation models had a positive effect in encouraging continuing professional development. As the educational model had generated more interest in education and learning at the development phase, we wondered whether it would encourage more reflective practice. One postulated measure of increased critical reflection by doctors was the use of the “external” data (observation of practice, peer review, and patient satisfaction surveys), but we found no differences between the models in such use.

Doctors who had developed the criterion model and then changed to the educational model, and new recruits to both models, initially needed time and support to grasp the concepts but were then able to gather information for their folders without problems. This suggests that, for those using either the appraisal or independent route for revalidation, peer or educational support will be required.

Further developments

The simpler criterion model was the preferred choice in our study, and this model has been used to inform development of the Scottish revalidation folder.17 The Scottish revalidation folder has now been distributed to all GP principals in Scotland and is suitable for use either as part of the Scottish appraisal process or for the independent revalidation route. A separate revalidation toolkit incorporates work from this study and offers practical guidance to doctors preparing for revalidation. Both are available on the RCGP Scotland website.18 Developments based on this study have given general practitioners a practical and acceptable model to use when preparing for revalidation.

What is already known on this topic

UK doctors' professional standards of practice are made explicit in Good Medical Practice

Folders of evidence will be used to show doctors' competence, but no validated models exist for how to carry out revalidation by this method

Portfolio assessments have been developed for undergraduates, but they have poor reliability for postgraduates because of the varied nature of their content

What this study adds

The study developed two revalidation models: a criterion model, with revalidation as the primary purpose, and an educational outcome model, which combined revalidation with continuing professional development

The summative assessment of the folders was reasonably reliable

The simpler criterion model was preferred by participating doctors and has informed development of the Scottish revalidation folder

Acknowledgments

We thank Miriam Friedman Ben-David and Jennifer Laidlaw of the Centre for Medical Education, University of Dundee, for methodological support and Peter Donnan of Tayside Centre for General Practice, University of Dundee, for statistical advice.

Footnotes

  • A companion paper giving details of the development and implementation of the two revalidation models is on bmj.com

  • Contributors DB was the principal investigator, who conceived and developed the original idea, led the study, and prepared the manuscript. KP coordinated the study, organised the questionnaires, carried out the interviews, and contributed to the analysis. RR contributed to all stages of the study, including analysis, and contributed to manuscript preparation. DS helped develop the original idea, advised on the study throughout, sat on the expert group, and prepared the final manuscript. RH helped with the initial methodological design and supported the project throughout. DB is guarantor for the study.

  • Funding Main funding was from NHS Education for Scotland with supplementary funding from Tayside Primary Care Trust and Tayside Centre for General Practice Postgraduate Funds.


  • Ethical approval Not required.

References
