Objective—To develop a questionnaire to assess audit activity and to use it to evaluate systematically the quality of audit in obstetrics and gynaecology within NHS hospitals in the UK.
Design—Retrospective review of 212 consecutive questionnaires completed at hospital recognition committee visits for training accreditation, between 1 January 1993 and 31 August 1998, validated against hospital trust annual audit reports.
Main measures—Use of seven quality criteria developed within the Royal College of Obstetricians and Gynaecologists clinical audit unit and also assessment of support for audit and participation in regional and national audit. Results were compared between 1993/4 (n=72), 1995/6 (n=72), and 1997/8 (n=68) for evidence of improvement.
Results—After modifications, the version of the questionnaire used from 1993 proved to be a satisfactory tool with minimal need for subsequent change. The results showed a significant improvement in the quality of obstetric and gynaecology audit over time (p<0.0001), with 36 (53%) of departments in the final two year period meeting all seven criteria. By this stage, 60 (88%) of departments had reached the stage of re-audit and 55 (81%) had conducted patient satisfaction surveys, both having improved significantly over time. Critical incident monitoring also became more widely used over time. Validation of the topics audited was possible for the 45% of hospitals whose trust annual audit reports were available, and these showed a high level of agreement.
Conclusions—It has proved possible to conduct an audit of audit using the current system of hospital recognition visits for training accreditation. This has shown great variation in the depth and breadth of audit being undertaken within individual obstetric and gynaecology departments. Since 1993 there has been an improvement in the quality of audit programmes undertaken, in particular in the number of hospitals carrying out critical incident monitoring, patient satisfaction surveys, and re-audit. This should be associated with improvements in staff training and in patient care.
(Quality in Health Care 2000;9:37–41)
The assessment of the quality of health care is fraught with difficulties. Even more difficult is the assessment of the quality of the many quality improvement programmes and activities that currently exist. Yet in England the government has stated that it is “... determined to place quality at the heart of healthcare”.1 Many factors can influence the quality of health care, ranging from government policy to clinicians reviewing and updating their practice. In obstetrics and gynaecology there have been various high profile cases where the quality of care has fallen below nationally accepted standards, such as in the case of cervical screening. In these situations both the public and the media can exert a strong influence over the quality of care provided.
Clinical audit is seen as an essential component of improving the quality of care, although its usefulness has been questioned by some.2, 3 The role of audit within the new quality culture, such as the clinical governance agenda in England, needs to be reviewed. Monitoring audit activities at a local level should identify the well developed audit programmes and methodologies that exist in some hospitals. These can then be disseminated more widely to allow the widescale development of high quality clinical audit. In addition, review at a local level may reveal hospitals requiring help to enhance the quality of their audit work. Comparison from year to year gives reassurance that good practice is increasing. Review of audit is difficult, but it is necessary to monitor not only the quantity but also the quality of audit to determine whether clinical audit really is achieving the improvements in the quality of patient care envisaged at its introduction.4, 5
The medical royal colleges in the UK are charitable institutions whose objectives are to improve patient care. They are not trade union type organisations. With the development of audit, and specific government money being given to the royal colleges to assist in the process, the Royal College of Obstetricians and Gynaecologists (RCOG) set up a specific clinical audit unit. The unit decided that monitoring of the quality and quantity of audit might be achievable through adding this to the agenda for the regular visits which their hospital recognition committee make to all hospitals. This would allow “audit of audit” to be conducted. This paper describes the development of this audit process, the results of over five years of its use, its possible future development, and its wider applicability.
Hospital recognition visits
The RCOG, in line with other colleges and faculties, has a hospital recognition committee, which was established in 1944. Representatives of this committee visit all hospitals requiring recognition for training in obstetrics and gynaecology in the UK approximately every five years. The recognition visit is conducted by two consultants from another locality, usually one relatively senior and the other relatively junior. A prearranged format is used, with much of the information required being filled in on specific forms in advance. During the day long visit there is an opportunity to check the data supplied by reviewing the facilities and interviewing staff. The interviews, which are conducted with senior staff, trainees, and health professionals in related disciplines (for example, anaesthesia), are held in total confidence. A report is produced after the visit. The ultimate sanction of the report is to withdraw recognition of training from a department, which would have the effect of closing the department as the service is dependent on trainees. This would not normally be applied without warning; usually a set of requirements is produced, the department has perhaps one year to address the problems, and a further visit then takes place. Areas causing concern might be inadequate supervision of trainees, an insufficient number of cases of particular conditions, or inadequate library or accommodation facilities.
With the setting up in 1991 of the RCOG clinical audit unit it was decided that an attempt to audit the standard of audit nationally could be undertaken using the existing RCOG review structure of the hospital recognition visit. After discussions with the chairman of the hospital recognition committee it was agreed that an audit questionnaire would be developed which would be filled in with the other forms in advance and discussed at the time of the visit. After the visit the completed questionnaire would be sent to the RCOG clinical audit unit for analysis.
DEVELOPMENT OF THE QUESTIONNAIRE
To date there have been six versions of the questionnaire. The first two versions, used in 1991 and 1992, reflected the then climate of medical rather than clinical audit and also looked at the resources present to support audit. In 1993 there was a change of emphasis, with the term clinical audit becoming more widely used, reflecting that in all areas of medicine the giving of care was a multiprofessional activity. In addition, audit was moving from simple presentation of results to attempting to change practice and re-auditing the topic. Furthermore, the need for the patient to be at the centre of the process was being appreciated. Accordingly, version 3 of the form, introduced in 1993, had, in addition to the questions on the previous form, questions relating to these newer concepts (table 1). To get a better idea of the type of audits being undertaken and whether change was taking place, an additional form was developed. This was designed so that the topics audited in the previous year could be listed; against each were columns for the professional group and grade of the staff involved in the audit, the changes in practice recommended as a result of the presentation, whether it was a first time audit or a re-audit, and when, if indicated, the topic would be re-audited. It was felt that this questionnaire would allow more insight into both the quantity and quality of audit. The only subsequent changes to the questionnaire, made as a result of feedback and analysis, have been ones of layout and improved wording to facilitate accurate responses. As a result, it is possible to analyse together all versions from 1993 onwards. A few hospitals had repeat visits during the time period of the study; when this occurred the second questionnaire was used rather than the first. Seventy two questionnaires related to visits between 1993–4, 72 between 1995–6, and 68 from January 1997 to 31 July 1998.
A further 22 hospitals underwent recognition visits during the time period of the study, but audit questionnaires were not filled in and therefore these 22 hospitals could not be included in the analysis.
VALIDATION OF DATA
As part of the RCOG clinical audit unit's routine activity, audit bulletins are sent to all hospitals on a regular basis. In return the unit requests a copy of the hospital's annual audit report. About 50% of hospitals respond with a copy of their report. Thus it is possible to validate at least some of the activity reported on the questionnaire. Agreement was defined as a concordance rate of at least 80% between the topics listed on the questionnaire and those in the hospital annual audit report.
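The 80% concordance rule can be expressed as a simple computation. The sketch below is illustrative, not the unit's actual procedure: it assumes topics are compared by exact (case-insensitive) title match, whereas in practice topic titles would need normalisation or manual matching.

```python
def concordance(questionnaire_topics, report_topics):
    """Fraction of questionnaire-reported audit topics that also appear
    in the hospital's annual audit report.

    Exact string matching is a simplifying assumption; real topic titles
    would be matched by hand or after normalisation.
    """
    if not questionnaire_topics:
        return 0.0
    reported = {t.strip().lower() for t in report_topics}
    matched = sum(1 for t in questionnaire_topics
                  if t.strip().lower() in reported)
    return matched / len(questionnaire_topics)


def agrees(questionnaire_topics, report_topics, threshold=0.8):
    """Apply the study's rule: agreement means >= 80% concordance."""
    return concordance(questionnaire_topics, report_topics) >= threshold
```

For example, a questionnaire listing five topics of which four are confirmed in the annual report gives a concordance rate of exactly 0.8, which just meets the agreement threshold.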
With the developments in clinical audit in the early 1990s, criteria to assess quality and quantity of audit were developed within the RCOG clinical audit unit.6 These were circulated widely and are listed (box 1). These criteria have not been altered, and with the development of version 3 of the questionnaire it was possible to assess each of these criteria for all hospitals.
Box 1. RCOG clinical audit unit: criteria for quality audit
(A) A regular series of meetings has been held and a timetable has been set for future audits
(B) The audit cycle has been completed for at least four audit topics
(C) A regular audit of case notes, discharge letters, and summaries has occurred
(D) A patient satisfaction survey has been conducted
(E) A regular perinatal mortality audit meeting has been held at a frequency appropriate to the size of the hospital
(F) Multidisciplinary audit involving professions other than midwives, paediatricians, and pathologists has occurred
(G) Where deficiencies have been shown, at least one topic has been re-audited
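The seven criteria in box 1 amount to a per-department checklist, and the study's headline figure is the count of departments satisfying every item. A minimal sketch of that tally, assuming a hypothetical encoding of each questionnaire as a mapping from criterion label to a boolean (this is not the RCOG's own data format):

```python
# Box 1 criterion labels (A)-(G)
CRITERIA = ["A", "B", "C", "D", "E", "F", "G"]


def meets_all(dept):
    """True if a department satisfies all seven quality criteria.

    `dept` maps each criterion label to a boolean derived from the
    questionnaire; a missing entry counts as not met.
    """
    return all(dept.get(c, False) for c in CRITERIA)


def tally(departments):
    """Count departments meeting every criterion."""
    return sum(meets_all(d) for d in departments)
```

A department failing any single item, for example multidisciplinary audit (F), drops out of the tally, which is why the all-seven figure is much lower than the figures for individual criteria.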
The main objective of the study was therefore to make an assessment of audit in obstetrics and gynaecology within the UK and to assess its quality against the RCOG clinical audit unit's quality criteria. It was also hoped to compare audit activity over time to see whether good practice was increasing. For this, statistical analysis was done using a χ2 test for trend; p<0.05 was considered significant.
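The χ2 test for trend across ordered time periods can be sketched as the Cochran-Armitage statistic, computed here with only the standard library. The counts in the usage example are illustrative: the first and last periods follow from the reported 15% of 72 and 36 of 68 departments meeting all seven criteria, but the middle period is inferred from the overall total of 69, so treat the numbers as an example rather than the paper's exact data.

```python
import math


def chi2_trend(successes, totals, scores=None):
    """Cochran-Armitage chi-squared test for trend in proportions
    across ordered groups. Returns (chi2 statistic on 1 df,
    two-sided p value from the normal approximation)."""
    k = len(successes)
    if scores is None:
        scores = list(range(k))  # default: equally spaced period scores
    N = sum(totals)
    p_bar = sum(successes) / N  # pooled proportion under the null
    # Score-weighted excess of successes over their null expectation
    t_num = (sum(s * r for s, r in zip(scores, successes))
             - p_bar * sum(s * n for s, n in zip(scores, totals)))
    # Variance of that quantity under the null of no trend
    var = p_bar * (1 - p_bar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / N
    )
    z = t_num / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z * z, p


# Illustrative counts of departments meeting all seven criteria
# in 1993/4, 1995/6, and 1997/8 (middle period inferred, see above)
chi2, p = chi2_trend([11, 22, 36], [72, 72, 68])
```

With these counts the statistic is about 22.5 on one degree of freedom, comfortably below the p<0.0001 threshold the paper reports for this trend.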
Results
The analysis is based on 212 questionnaires returned from different hospitals. Teaching hospitals accounted for 19% (14/72) of the hospitals assessed between 1993–4, 15% (11/72) between 1995–6, and 16% (11/68) between 1997 and 1998.
Resources for audit include audit technology, staff, and funds. In total, 182 (86%) hospitals reported that audit staff were available and 146 (69%) had access to audit technology. Of the 72 hospitals visited in 1993–4, 80% had audit staff available compared with 87% of those visited in 1995–6 and 90% of those visited in 1997–8 (table 2). Sixty three per cent of hospitals visited in 1993–4 had technical backup compared with 74% of those visited in 1995–6 and 70% of those visited in 1997–8 (table 2). None of these changes reached statistical significance.
A consultant was responsible for audit in obstetrics and gynaecology in all but one of the hospitals, where an associate specialist oversaw it.
One of the criteria for quality audit (A) is that a regular series of audit meetings has been held. A total of 170 (80%) hospitals conducted audit meetings monthly or more frequently, 30 (14%) every two months, and 10 at varying frequencies ranging from every three months to every six months. One hospital conducted audit “when it felt it was required”, and another had only just begun an audit programme before the recognition visit in 1993. These data are shown by year group in table 3, illustrating changes with time.
With regard to the duration of audit meetings, 100 (47%) hospitals spent between one and two hours on each meeting, 72 (34%) between two and three hours, and 22 (10%) three hours or more. The rest spent a variable time, or the time was not recorded on the form.
Topics studied appeared to relate to the need for a particular hospital. Examples of common topics studied in obstetrics included caesarean section, induction of labour, use of steroids before preterm delivery, management of breech presentation, and epidural services. In gynaecology, common topics included day case procedures, threatened miscarriage, and antibiotic prophylaxis.
One of the quality criteria (B) was that at least four topics should be studied in a year. Table 3 illustrates the data for this and shows that although no difference existed between 1993/4 and 1995/6, by 1997/8 there was a significant improvement with 94% of departments auditing at least four topics a year.
AUDIT OF DOCUMENTATION
Another of the criteria (C) was that there should be audit of documentation. A total of 139 (66%) hospitals audited their case notes and 144 (68%) audited discharge summaries or letters to general practitioners, or both. There was no significant change in the number of hospitals auditing documentation over time (table 3).
PATIENT SATISFACTION SURVEYS
A further quality criterion (D) was that patient satisfaction surveys should be undertaken and these were reported from only 116 (55%) hospitals. However, this percentage increased significantly over time (p<0.0001) (table 3). Details of the surveys were not always given, so it was not possible to distinguish whether they related to the gynaecology or maternity services.
PERINATAL MORTALITY AND MORBIDITY
All 212 hospitals reported that they conducted regular perinatal audit with paediatricians, fulfilling the longstanding RCOG requirement and our own quality criterion (E) that regular perinatal mortality meetings are held (table 3). In addition, 111 (52%) hospitals extended their criteria and discussed cases of fetal loss at 20–23 weeks gestation. Overall, 152 (72%) hospitals conducted critical incident monitoring such as “near miss” morbidity audit of babies, and the percentage of hospitals conducting “near miss” morbidity meetings increased significantly across the years of the study (p<0.0001) (table 2).
INADEQUACIES REVEALED DURING AUDIT
In total, 142 (67%) hospitals reported inadequacies documented as a result of their audit activity and stated that action had been taken to remedy them. Inadequacies included illegibility and untidiness of case notes.
Undertaking multidisciplinary audit was regarded as a key quality criterion (F) because the opportunities for unsatisfactory patient care are likely to be increased when several disciplines are involved. In addition to audit with midwives, paediatricians, and pathologists (which was considered to be essential), 101 (48%) hospitals conducted other multidisciplinary audit and this increased significantly with time (table 3). This included audit with other medical specialties (surgeons and radiologists) and some other hospital professions such as physiotherapists and ultrasonographers.
INVOLVEMENT IN REGIONAL/NATIONAL AUDITS
In total, 136 (64%) hospitals reported being involved in either regional audits or national audits. These audits included regional colposcopy audits, regional cancer registration audits, and involvement in the national audits of endometrial resection and ablation for heavy menstruation (MISTLETOE7) and the national hysterectomy audit (VALUE). These two national audits took place during only part of the period studied and had higher percentages of hospitals taking part.
STAGE OF RE-AUDIT
A key criterion for evaluation of the quality of care was that changes were being addressed and then monitored by re-audit (G). One hundred and fifty seven hospitals (74%) had re-audited, or planned to re-audit, some topics. As anticipated, there was an increase in the number of hospitals re-auditing during the years of the study (table 3), and this was statistically significant (p<0.003).
QUALITY OF AUDIT
Overall, 69 (33%) hospitals fulfilled all the criteria for good quality audit as outlined in box 1. The percentage of hospitals fulfilling the criteria increased significantly over time (p<0.0001) (table 3).
Validation of the data with those in the hospital annual audit report was possible in only 45% of cases. There was agreement (at least 80% of topics confirmed in the hospital report) in 72% of cases.
Discussion
This study has shown that it is possible for a royal college to assess audit activity through its existing monitoring of training via hospital recognition committee visits. Knowing that audit is in place does not guarantee that patient care is improving. Mechanisms need to be in place to ensure not only that audit is occurring but also that there is reflection on the quality of the audit. Without this it is unlikely that there will be an improvement in the quality of care given to patients. By using the RCOG clinical audit unit criteria for quality in audit (box 1), the study has shown that the quality of audit in obstetrics and gynaecology seems to have improved over the past five to six years. This is likely to mean that patient care has also improved.
One limitation of the study was that, by using the RCOG hospital recognition visits as a means of distributing the audit questionnaires, it looked at different hospitals in different time periods. It might have been more appropriate to follow the same hospitals over time to look for improvements in their audit programmes. Activity may be over reported with this method of data collection because the study surveyed what the hospitals said they did, rather than what they actually did, and clinicians' perceptions of what they do may be at variance with what actually happens. Total validation of the questionnaires was not possible, but where hospital annual audit reports were obtained the correlation was high. It could be claimed that these reports were also prone to the same bias of overreporting, but being an independent source makes this less likely.
The results of the current study show that the original plans from the NHS Management Executive have been fulfilled8 with consultants leading audit, a generally high frequency of meetings, and the majority of hospitals having audit staff and technology to assist audit. The overall quantity of audit has increased with time. Changes in funding arrangements for audit have occurred9 and, with these and other likely changes and pressures, it is important that these resources are not lost and are used effectively.
Audit of case notes, letters, and documentation has long been advised,10 a recommendation made more pressing by today's high level of attempted medical litigation. Many claims that could be defended cannot be because of inadequate documentation. Money saved by not needing to settle unnecessary claims could be put directly towards patient care. Although most of the hospitals surveyed are auditing documentation, even in the last time period the proportion was well short of the 100% it should be. This needs urgent attention, as many of the inadequacies demonstrated by audit were in the area of documentation and quality of correspondence.
Outcome audit is difficult in most specialties, but critical incident monitoring may highlight areas of concern, which should prompt investigation into ways to improve the service. Critical incident monitoring meetings appear to be becoming routine practice as illustrated by near miss perinatal morbidity audit.
Patient opinion can be used as a way of evaluating health care,5, 11, 12 and various validated questionnaires are available.13 The fact that only 55% of hospitals visited had conducted satisfaction surveys was disappointing, although the proportion undertaking such surveys is increasing each year. Patient satisfaction surveys were counted for both obstetric and gynaecological service users. There are validated tools for assessing patient satisfaction with maternity services, but few such tools for gynaecology, which may be a factor in the number of surveys undertaken. More effort is needed to conduct surveys that result in positive actions appreciated by patients and all healthcare professionals. Indeed, the government aims to undertake annual national surveys of patient and user experience as “a means of measuring the NHS against the aspirations and experience of its users, to compare performance across the country and to look at trends over time”.1
Problems identified through audit require the underlying causes to be delineated, changes implemented, and then the subject re-audited. Re-audit can monitor the introduction of intended beneficial changes. By the end of the study, most units (74%) had already completed a re-audit. This was one of the suggested indicators for quality audit6 (box 1). Although all the quality features were present in only 33% of hospitals, the significant increase from 15% to 53% over time is indicative of progress likely to be associated with improvements in patient care.
The challenge is whether this improvement can be sustained and increased over the next five years as the UK NHS moves into the new era of clinical governance. Much improvement can occur just through the sharing of successful local audit methodologies and the ways that individual departments have achieved change. Databases of examples of good practice need to be readily available. In addition, new methodologies need to be developed, not just for audit itself but also for the process of auditing audit activity so that a critical assessment of audit activity can be undertaken easily. The method for assessing the quality of audit which the RCOG has developed has been found easy to apply and could be applied widely. It could be developed for use in conjunction with The Commission for Health Improvement (CHIMP). Applied nationally it should facilitate the continuing development of high quality audit. Commitment and determination to keep the audit process active must accompany this.14 Obstacles such as lack of time, resistance to change, and lack of motivation must be overcome to achieve better service for patients through the audit process.
We wish to thank the chairmen of the hospital recognition committee, and their colleagues in the postgraduate training department for their help with this project. We would also like to thank Miss Nicola Rice, secretary to the hospital recognition committee, for her help in retrieving missing hospital recognition visit forms. We would also like to thank Dr Paul Gingham and Dr Gillian Penney of the RCOG audit committee for their helpful suggestions. DS and KK were funded from the core grant to the RCOG clinical audit unit from the NHS Executive.