
Assessing organisational development in primary medical care using a group based assessment: the Maturity Matrix™
G Elwyn,1 M Rhydderch,1 A Edwards,1 H Hutchings,1 M Marshall,3 P Myres,4 R Grol2

1 Primary Care Group, University of Wales, Swansea SA2 8PP, UK
2 Centre for Quality of Care Research, University of Nijmegen, 6500 HB Nijmegen, The Netherlands
3 National Primary Care Research and Development Centre, University of Manchester, Manchester M13 9PL, UK
4 CAPRICORN Primary Care Research Network, Croesnewydd Hall, Wrexham LL13 7YP, UK

Correspondence to: M Rhydderch, Primary Care Group, University of Wales, Swansea SA2 8PP, UK; MelodyRhydderch@aol.com

Abstract

Objective: To design and develop an instrument to assess the degree of organisational development achieved in primary medical care organisations.

Design: An iterative development, feasibility and validation study of an organisational assessment instrument.

Setting: Primary medical care organisations.

Participants: Primary care teams and external facilitators.

Main outcome measures: Responses to an evaluation questionnaire, qualitative process feedback, hypothesis testing, and quantitative psychometric analysis (face and construct validity) of the results of a Maturity Matrix™ assessment in 55 primary medical care organisations.

Results: Evaluations by 390 participants revealed high face validity with respect to the instrument's usefulness as a review and planning tool at the practice level. Feedback from facilitators suggests that it helped practices to prioritise their organisational development. With respect to construct validity, there was some support for the hypothesis that training and non-training status affected the degree and pattern of organisational development. The size of the organisation did not have a significant impact on the degree of organisational development.

Conclusion: This practice based facilitated group evaluation method was found to be both useful and enjoyable by the participating organisations. Psychometric validation revealed high face validity. Further developments are in place to ensure acceptability for summative work (benchmarking) and formative feedback processes (quality improvement).

  • organisational assessment
  • primary care
  • quality improvement


The assessment of organisational aspects of general practice is high on policy agendas, both as a means of stimulating quality improvement and of achieving accreditation.1–3 In most contexts the systems of assessment are summative in that judgements are made against preset standards to decide levels of achievement. Such assessments are typically linked to accreditation by professional bodies, often with the encouragement of the respective government or healthcare agency.4,5 Assessment methods are conceptualised, in the main, as accreditation type processes in that they are based on inventories of indicators or items. The standards applied typically cover a wide range of organisational issues, from premises and equipment to delegation, communication and leadership.

Examining accreditation systems in primary care, Buetow and Wellingham noted that these measurement strategies have up to five functions: quality control, regulation, quality improvement, information giving, and marketing.5 Many of these functions are in conflict and the maximum tension is seen when measurements are used for quality control and regulation, on the one hand, and for quality improvement on the other. Commenting on the overt professional control of these processes, they argued for greater clarity of purpose, enhanced public confidence, and wider stakeholder involvement, particularly when the aim is quality control. Arguing for a separation of the aims, they noted that summative measures could potentially lead to resistance, window dressing, or gaming. They concluded that a quantitative snapshot could not provide an adequate picture of the performance of a complex task such as the delivery of health care. Overt summative approaches risk the loss of a formative developmental feedback approach that could inform quality improvement strategies.

When considered from the perspective of helping practices to develop and improve, these systems have disadvantages. Systems that judge against minimal standards can often fail to inspire movement towards improvement. Likewise, systems that judge against gold standards (based on leading edge practice) can sometimes discourage practices with substantial development needs from embarking on quality improvement activities. The RCGP Quality Practice Award in the UK6 and equivalent methods in other countries (such as those of the Dutch, Australian or New Zealand Colleges of General Practice7–9) are prime examples of assessment systems that aim to reward excellence and/or minimum standards of care. Such schemes are attractive to practices that seek accreditation of minimal standards or to those able to achieve high standards with a manageable degree of work. For others, arguably the vast majority who occupy the middle ground, the gap between minimal and excellent standards is a void that does little to encourage engagement with standards driven quality improvement.

However, some of the biggest gains in quality improvement can be made by working with practices that are neither at the remedial end nor at the leading edge. This middle group has the potential to improve patient care by taking a few steps in the right direction. For such practices, revealing the gap between existing performance and the next step in the development process is more enabling than aiming for a gold standard. Few, if any, practice assessment methods have been designed to encompass the needs of the majority of practices that operate at this level. In short, there is a need for a practice assessment method that is formative in nature and that works for the practices representing the majority of those which the management tier (currently primary care organisations in the UK) is concerned to improve.

Organisational measurement processes seem to be conceptually grounded on a “regulatory” concept rather than on a formative aim of providing feed-forward information to motivate developmental change.5 They seldom involve people from different roles in organisations in the process of assessment. It is known that assessments that respect historical constraints and incentives, are sensitive to different starting points, engage teams, identify developmental needs, and help to set priorities for future change are much more in tune with the internal workings and motivation of those who work in most organisations.10

We could not identify approaches that had rigorously set out to achieve assessment methods with these aims.4,5,11 While we accept that optimal quality of care will require disease specific as well as organisational indicators, we have deliberately focused here on the practice as a system. The realisation that many determinants of quality lie at the organisational level as well as the individual level12,13 is placing more emphasis on the assessment of organisational development.14–17 We therefore set out to devise a method that was sensitive to five issues:

  1. Organisations tend to develop along familiar lines. Not all countries have a tradition of generalist medical care but, where primary care has been supported, the typical starting point is that of a sole practitioner and a receptionist. Over time the organisational shape is moulded by societal expectations and payment schemes3,18 until these groupings develop to become amalgamations of doctors, nurses, healthcare assistants and others, eventually coordinated by professional managers.

  2. Primary care organisations, even when small in size, are complex and multidisciplinary groupings with differing perspectives on the levels of development achieved.

  3. The process of assessing the organisation should engage many people, partly as a defence against “gaming” but also, importantly, so that assessment forms a key step in forming an internal system wide motivation for future development.

  4. The results of such an assessment should be capable of being viewed as both criterion and norm referenced displays. The main aim should not be to create comparative benchmark data (although this is useful at a higher level of aggregation), but to provide the individual organisation with a simple indicator of where it lies against the spread of maturity achieved by other organisations.

  5. The method should be simple, take relatively little time, and be feasible within the organisation with a minimum of facilitation time.

By addressing these five issues, we hoped to achieve a tool that was useful for both summative and formative purposes and that would be generic, validated, easy to use and, by using a group process, a defence against the tendency to “game” when assessments are undertaken.

With these principles in mind, this paper examines whether it is possible to design an assessment method that (1) has high face validity, (2) is acceptable to practitioners and to external agents with an interest in practice development, and (3) is feasible to use in a group setting; in addition (4), we begin to examine its performance as a measure of organisational development in general practices by exploring its relationship with other practice characteristics.

METHODS

Instrument development, piloting, and validation

Building on the need to create an instrument that was primarily a formative assessment, the method was designed using the assumption that general practices develop along similar pathways, increasing their sophistication over time with respect to core organisational activities.19 In anticipation of the need to develop consensus about assessment areas and methods, care was taken to ensure adequate content validity.20 There were three distinct stages of instrument development leading to a pilot field test and a validation and feasibility study, as described below.

Stage 1 Prototype design and content specification

An outline “matrix” was designed (by GE and PM) in which each relevant area of general practice activity is described by a column, subdivided into a set of cells that describe increasing “development” in a common direction for primary care organisations. The following eight areas were described in this format:

  • clinical records;

  • audit of clinical performance;

  • access to clinical information;

  • use of guidelines;

  • prescribing monitoring;

  • practice based organisational meetings;

  • sharing information with patients; and

  • patient feedback systems.

This draft Maturity Matrix™ (1997) was circulated for consultation to 50 general practitioners who held educational and academic positions (continuing medical education tutors, vocational training scheme course organisers, and all general practitioners in academic positions). Positive comments were received about the concept of an incremental approach to practice development: it was easy to use and would help practices to plan. Concerns were also raised. Clinicians were uncertain whether the assessment was to be used by practices themselves (formative assessment) or by external agents. They wanted to know whether or not the higher end achievement levels were based on consensus.

Stage 2 Prototype development

After further adaptation a second consultation process was conducted. A modified Maturity Matrix™ was circulated in 1998 by a Medical Audit Advisory Group to 35 clinicians who held educational or other professional leadership positions. They were asked to complete the Maturity Matrix™ for their practice and to answer a questionnaire. Seventeen clinicians responded (49%). All were positive about the usefulness of the Maturity Matrix™ as a means of assessing the organisational development of general practices, its relevance to development plans, and its ease of use as a potential external assessment of the organisation.

Stage 3 Pilot field test

Using a version based on the results of the above stages, the Maturity Matrix™ instrument was used for the assessment of a convenience sample of practices in 2001. A total of 32 organisations using an agreed multidisciplinary group assessment were visited by one of the authors (GE) or by a research assistant. Revisions were made at the end of this stage and a completed version was published in 2002 (fig 1). This format was also used to provide visual feedback, allowing comparisons of aggregated results. The Maturity Matrix™ assessment process is described in box 1 and consists of a two step (individual and group level) profile determination, led by an external facilitator. An ordinal scale was assumed, with achievement of each cell dependent on prior achievement of the preceding cell.

Figure 1

 Maturity Matrix™ 2002 showing one practice profile and all sample results. The solid black line indicates the assessment of one practice. The shaded area represents the aggregated practice achievement (by 10% increments).

Box 1 Using the Maturity Matrix™ for practice assessment22

The Maturity Matrix™ is designed as a self-assessment tool for members of a primary medical care organisation, used in a group setting led by an external facilitator. The assessment meeting should include doctors, nurses (practice and community based), the practice manager, and other clerical staff. There is no limit on the number that can be present. Social workers, midwives, and other associate staff may also be included.

Without having had prior exposure to the Maturity Matrix™, each individual is given a blank profile and asked to circle cells equivalent to the level of organisational development achieved by their practice. At this stage individuals should not confer.

When individuals have completed their scoring, the facilitator conducts a discussion by taking each activity area (column) in turn. The aim is to examine the agreement among members of the practice over levels of development achieved for each area. The aim is not to provoke a debate between team members but to test for consensus. In this way, the group is guided to decide collectively which cell in each area best represents the level of development achieved by the practice. Agreement should be based on the lowest level at which consensus occurs. The agreed profile is collected for further analysis in an aggregated sample.
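To make the two step procedure concrete, a minimal sketch (in Python) of how individual and group profiles might be recorded is given below. The area names are abbreviated, and the rule shown for the group step, taking the minimum level circled across individuals, is only an approximation of the facilitated consensus discussion described above, not the published protocol.

AREAS = ["clinical records", "audit", "access to information",
         "guidelines", "prescribing", "meetings",
         "sharing information", "patient feedback"]

def group_profile(individual_profiles):
    """Step 2: one agreed cell per activity area. The facilitated
    discussion is approximated here by 'the lowest level at which
    consensus occurs', i.e. the minimum level circled by any member."""
    return {area: min(p[area] for p in individual_profiles)
            for area in AREAS}

# Step 1: each team member circles one cell per area without conferring
# (the levels below are hypothetical).
individual_profiles = [
    {a: 3 for a in AREAS},
    {a: 4 for a in AREAS},
    dict({a: 3 for a in AREAS}, **{"patient feedback": 2}),
]
print(group_profile(individual_profiles))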

Stage 4 Feasibility testing and validation study

This stage took place between April 2002 and December 2002. Meetings were held with chief executives of the 22 primary care organisations in Wales (local health groups) at which their support for a feasibility project was obtained. Primary care facilitators or clinical governance staff were nominated to attend Maturity Matrix™ training workshops. A manual was finalised with the assistance of this facilitator group.21 Each facilitator was asked to recruit 6–10 practices covering a range of practice sizes (single handed to large partnerships), using a three visit approach. The first visit was explanatory, the second consisted of a Maturity Matrix™ assessment, and the third provided an opportunity for feedback and a review of organisational development priorities. Baseline data were collected at each practice (list size, training status for postgraduate doctors in general practice, staff whole time equivalents, and the number of patients attracting deprivation payments). At completion the Maturity Matrix™ profile was sent anonymously for inclusion in a comparative dataset and a feedback report was generated, comparing the organisation with practices of similar size and with an all Wales practice profile.

Each practice assessment participant completed an evaluation questionnaire, based on a 6-point agree/disagree scale. The questionnaire asked about the usefulness of the Maturity Matrix™ as a review process and whether it helped the planning of future developments. There was also space for free text comments about the contribution to practice development planning. Written and verbal feedback from facilitators was also collected during this feasibility study.

Ethical approval for the study was provided by the All Wales MREC committee.

ANALYSIS

Responses to the evaluation questionnaire were analysed (frequency and summary statistics) and the facilitator comments summarised. Feedback from the facilitators (field notes and comments in a review meeting) was categorised (content analysis). Data about the facilitators and their respective primary care organisations, and practice level data on patient list size, training status, practice staff profiles and the number of patients attracting deprivation payments were analysed (summary statistics). The Maturity Matrix™ was treated as a series of Guttman scales covering eight areas of organisational activity in which greater levels of achievement were dependent on the attainment of previous steps. A global score was allocated to each practice profile, calculated by giving a count of 1 to each cell achieved. The minimum score possible was 8; the maximum possible score was 49. Scores were transformed into percentages at a global level (across the eight areas) and at a column level (for each of the eight areas) to reflect the variation in scaling used across the activity areas. Box 2 provides definitions of the psychometric terms used in the article.
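As an illustration of this scoring scheme, the sketch below (in Python) computes a global score and the percentage transformations. The number of cells per column is hypothetical (the real columns vary in length and sum to 49 across the eight areas), and the exact percentage transform, score divided by maximum, is our assumption.

COLUMN_MAX = {"clinical records": 7, "audit": 6, "access to information": 6,
              "guidelines": 6, "prescribing": 6, "meetings": 6,
              "sharing information": 6, "patient feedback": 6}  # sums to 49

def global_score(profile):
    """Guttman scaling: reaching a cell implies all earlier cells in the
    column, so a column contributes its agreed level (at least 1), giving
    a global range of 8-49 with these column lengths."""
    return sum(profile.values())

def percentage_scores(profile):
    per_column = {a: 100.0 * level / COLUMN_MAX[a]
                  for a, level in profile.items()}
    overall = 100.0 * global_score(profile) / sum(COLUMN_MAX.values())
    return overall, per_column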

Box 2 Statistical terminologies

Face validity indicates whether an instrument “appears” to either the users or designers to be assessing the correct qualities. It is essentially a subjective judgement.

Content validity is similarly a judgement by one or more “experts” as to whether the instrument samples the relevant or important “content” or “domains” within the concept to be measured. An explicit statement by an expert panel should be a minimum requirement for any instrument. However, to ensure that the instrument is measuring what is intended, methods that go beyond peer judgements are usually required.

Construct validity refers to the ability of the instrument to measure the “hypothetical construct” that is at the heart of what is being measured. Construct validity is then determined by designing experiments that explore the ability of the instrument to “measure” the construct in question. This is often done by applying the scale to different populations which are known to have differing amounts of the property to be assessed.

Principal components analysis is a technique used for clustering variables into a reduced number of components based on their relationships with other variables.

Varimax rotation is an analytical method that helps make the interpretation of clustering into components less subjective.

Cronbach alpha is a measure of the reliability of a composite rating scale made up of several items or variables.
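For reference, the standard formula underlying this definition (a general psychometric result, not one reported in this study) is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_T^2}\right)

where k is the number of items, \sigma_i^2 is the variance of item i, and \sigma_T^2 is the variance of the total score.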

Descriptive analysis preceded an analysis of construct validity. The Maturity Matrix™ assesses the construct of organisational development. This is an abstract construct and we can only tentatively assume its existence by observing practice performance in relation to each of the eight areas of activity described by the Maturity Matrix™. Validating an abstract construct such as organisational development depends on developing mini-theories tested by hypothesising the relationships between organisational development and other more concrete features of general practice such as size, training status and deprivation.22 Organisational development as described by the Maturity Matrix™ covers a range of diverse activities from record keeping to patient feedback. It was therefore felt inappropriate to use a global Maturity Matrix™ score to test construct validity. We planned instead to produce a correlation matrix and, if appropriate, to go on to conduct a principal component analysis (PCA)—a form of factor analysis (see box 2).

The purpose of the PCA was to explore whether the eight areas of activity could be grouped into a reduced number of components. If appropriate, exploratory PCA (oblique rotation) would be conducted with an eigenvalue cut-off of 1.1, as this level is regarded as more discriminatory.23 PCA would be used because the ordinal ratings could be assumed not to distort the underlying metric scaling seriously. Oblimin rotation would be used to allow the components to be correlated.24 This determined whether the eight areas of activity could be clustered into a reduced number of components for the purpose of hypothesis testing. On the basis of component loadings, hypothesis testing would be conducted to examine the relationships between practice characteristics and organisational development as measured by the Maturity Matrix™. Although aware of the limited evidence base for our proposals,25 we hypothesised that there would be:

  • a difference by training status such that training practices would have higher organisational development scores than non-training practices;

  • a negative relationship between deprivation and organisational development; and

  • a positive (if weak) relationship between list size and organisational development.

The first hypothesis is based on the fact that training practices are inspected every 3 years to ensure that they meet the criteria laid down for training doctors in general practice. The second hypothesis is based on the recognition that need and demand are known to be higher in areas of deprivation; despite the “deprivation payment” uplift, such practices are still likely to find it hardest to develop their systems. The third hypothesis is based on the finding that no one size of practice has a monopoly on the delivery of quality25 but, to some extent, the economies of scale that are possible in larger practices may enable practice development to occur more easily.
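A minimal sketch (in Python) of the planned extraction step described earlier follows, assuming the column-level percentage scores form a practices by areas matrix X. Only the eigenvalue retention rule from the text is shown; the oblimin rotation applied after extraction is omitted and would normally be performed with a dedicated factor analysis package.

import numpy as np

def extract_components(X, eigen_cutoff=1.1):
    """X: practices x 8 areas matrix of column-level percentage scores."""
    R = np.corrcoef(X, rowvar=False)            # 8 x 8 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)        # real spectrum (R is symmetric)
    order = np.argsort(eigvals)[::-1]           # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > eigen_cutoff               # retention rule from the text
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # unrotated loadings
    return eigvals[keep], loadings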

Two tailed Mann-Whitney tests were used for categorical data (training status) and correlations were examined using the non-parametric Spearman’s rho test for continuous data (list size and deprivation).
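As an illustration, these tests might be run with SciPy as in the sketch below; the data frame layout and column names are hypothetical.

import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr

def test_hypotheses(df):
    """df: one row per practice with columns 'component_score',
    'training' (bool), 'list_size', and 'deprivation'."""
    training = df.loc[df["training"], "component_score"]
    non_training = df.loc[~df["training"], "component_score"]
    _, p_training = mannwhitneyu(training, non_training,
                                 alternative="two-sided")
    rho_dep, p_dep = spearmanr(df["deprivation"], df["component_score"])
    rho_size, p_size = spearmanr(df["list_size"], df["component_score"])
    return {"training p": p_training,
            "deprivation": (rho_dep, p_dep),
            "list size": (rho_size, p_size)}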

RESULTS

Facilitators

Nineteen of the 22 primary care organisations in Wales attended an initial meeting. During the set up of the feasibility study nine primary care organisations continued their involvement. Sixteen facilitators were trained to conduct the Maturity Matrix™ practice assessments: 13 were employees of a primary care organisation and three were general practitioners. Of those employed by a primary care organisation, six had responsibility for practice development and seven had responsibility for clinical governance.

Practices

The facilitators recruited 55 practices to participate from nine primary care organisation areas (table 1). Practice list size was normally distributed with a mean (SD) list size of 6018 (2735) patients. Thirteen practices had lists below 4000 and 24 had lists above 6001. The majority of practices (32/55) had fewer than 10% of their patient population attracting deprivation payments, and four had more than 70% of patients on their list qualifying for payments. One practice declined to release data regarding deprivation payments. Sixteen of the practices were postgraduate training practices; seven were single handed practices and one had nine partners. The practice personnel whole time equivalent (WTE) averages were as follows: 3.3 partners, 1.6 nurses, 0.9 managers, and 5.5 administrative staff. Overall, the practices varied considerably across many characteristics.

Table 1

 Global maturity score, % deprivation*, and mean list size by local health group area

Evaluation questionnaires

A total of 390 individual evaluation questionnaires were collected from the 55 practice assessments. Using a 6-point scale (strongly disagree to strongly agree), 96.7% agreed that the Maturity Matrix™ was a useful method to review the organisation (10% strongly agreed, 79% agreed, 7.7% mildly agreed) and 1.5% disagreed. When asked if the review was helpful for planning purposes 95.1% agreed (10.5% strongly agreed, 73.3% agreed and 11.3% mildly agreed) and 1.5% disagreed.

The free text comments were similarly positive and could be grouped under the three broad headings of practice communication, organisational development, and measurement. The group based assessment provided an opportunity for all the staff to collaborate on an appraisal of the organisation in which they worked. For many, this had been their first opportunity for a multidisciplinary perspective on their workplace. They greatly appreciated that time had been allocated for talking to each other, reaching a consensus about the organisational structure, and reflecting on future developmental priorities. Many participants noted that conceptualising the practice along a spectrum of development was useful. The assessment provided them with a sense of comparison against theoretical starting points, it gave the organisation a baseline against which to measure future progress, and it revealed areas of organisational strength and weakness. One of the most important findings was that the Maturity Matrix™ provided targets for future development by highlighting areas that had been neglected in favour of other competing recent developments. The participants appreciated that the measurement had involved them in the process: it was not an external assessment, it was not threatening, and many had enjoyed the process. A few critical comments were received: the group assessment in some practices had been too large, it took too much time, and some of the Maturity Matrix™ items were difficult to define. A critique of the scale is undertaken below.

Facilitator feedback

Facilitators provided feedback after conducting the Maturity Matrix™ sessions. They felt it had improved their relationships with the general practices. Some items of the tool required further definition, for example the concept of inclusion in a practice “team” and the prescribing dimension. The concept of Guttman (incremental ordinal) scaling caused problems in some activity areas: some practices did not agree with the stated achievement sequence. At the end of the data collection period the facilitators met and agreed that three additional organisational dimensions should be added to an updated version, namely risk management strategies, continuing professional development policies, and human resource management procedures.

Practice and all sample profiles

Figure 1 outlines the profile of one practice against a shaded backdrop representing the amalgamated performance of all other practices in the sample. For example, with respect to the clinical records area, it can be seen that the practice had developed considerably: most of its clinical encounters were coded electronically by clinicians in a searchable format. The shaded area suggests that most practices in the sample (approximately 50%) had also achieved this degree of development, but only a few had achieved a state where all clinical contact was kept in a searchable format. It has typically taken practices 8–10 years to develop their organisational arrangements for clinical record keeping from written records only to paperless. The investment by primary care organisations and their predecessors in information technology and prescribing may explain the relatively high levels of organisational development for “clinical record keeping” and “clinician access to clinical information”. Conversely, the development of “patient feedback” and “learning systems” is, by comparison, in its infancy and this is reflected in the lesser degree of organisational development achieved in these areas. These results indicate high face validity, although we cannot claim formal content validity.

Global Maturity Matrix™ scores

The distribution of global scores is shown in fig 2. The minimum score achieved by the sample was 25 and the maximum was 42, with a mean (SD) score of 32.78 (3.99). Data for the global scores across primary care organisation areas, the mean practice list sizes, and the number of patients attracting deprivation payments are shown in table 1.

Figure 2

 Distribution of global Maturity Matrix™ scores (%) across the practices (n = 55).

Exploratory factor analysis (principal components analysis)

On the basis of a correlation matrix, three components with an eigenvalue of more than 1.1 were extracted (table 2):

Table 2

 Component loadings (based on pattern matrix)*

  • Component 1 (Information management): consisted of two areas of activity—clinical records and clinician access to clinical information. Both areas describe the evolution of processes for storing and accessing information, one about patients and one about clinical evidence.

  • Component 2 (Communication): consisted of three areas of activity—organisational meetings, sharing information with patients, and patient feedback systems.

  • Component 3 (Quality improvement): consisted of three areas of activity—audit of clinical performance, use of guidelines, and prescribing.

Scores were calculated by summing the points achieved in those activity areas that loaded onto the relevant component (transformed to a percentage of the total possible dimension score).
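For illustration, this component scoring might be computed as in the sketch below, reusing the hypothetical column lengths from the earlier scoring sketch; the component membership follows the loadings described above.

COMPONENTS = {
    "Information management": ["clinical records", "access to information"],
    "Communication": ["meetings", "sharing information", "patient feedback"],
    "Quality improvement": ["audit", "guidelines", "prescribing"],
}

def component_scores(profile, column_max):
    """Sum achieved points over each component's areas and express the
    total as a percentage of the maximum possible for that component."""
    return {name: 100.0 * sum(profile[a] for a in areas)
                  / sum(column_max[a] for a in areas)
            for name, areas in COMPONENTS.items()}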

Hypothesis testing

We had hypothesised that training practices would have higher Maturity Matrix™ scores. This was partially confirmed: training practices had significantly higher scores than non-training practices with respect to the Information management component (mean rank score 36.5 for training practices and 24.5 for non-training practices; p<0.009, Mann-Whitney). No significant differences were found between training and non-training practices in the Quality improvement and Communication components.

The hypothesis of a negative relationship between deprivation and organisational development was confirmed for the Information management component: practices with a greater number of patients attracting deprivation payments had significantly lower scores (Spearman’s rho correlation coefficient −0.37, p<0.006). No significant relationships with deprivation were found for the Quality improvement and Communication components, nor were any components significantly related to list size, contrary to the hypothesis that a weak positive relationship exists between increasing list size and organisational development.

In summary, evaluations by participants and facilitators were very positive, indicating a high degree of acceptance, feasibility and enjoyment of the group based assessment method. The implicit lack of precision in the method was useful because the assessment was perceived as less threatening. The individual Maturity Matrix™ profiles of practices are visual representations of their achieved state of organisational development. They typically show “spiky” patterns indicating varying progress across activity areas, reflecting their history, investment decisions, and environmental setting (fig 1). Practices found the process of agreeing these profiles educational and were interested in comparisons with other practices that had similar characteristics (size, deprivation, training status) using the visual feedback format.

Principal components analysis revealed that the Maturity Matrix™ assesses practices in three components which we have labelled Information management, Quality improvement and Communication. Significant differences were found between training and non-training status practices with respect to Information management. A significant negative relationship was found between an index of deprivation and Information management. No other significant relationships or differences were found.

DISCUSSION

Principal findings

This practice based self-evaluation method was found to be both useful and enjoyable by participating practices. The objective of developing an instrument with high face validity, acceptability to practices, and feasibility in group assessment contexts was achieved. The approach of requiring a two step team based assessment process using external facilitators to reduce the possibility of “gaming” (that is, the extent to which predetermined viewpoints held by those in powerful positions can influence the assessment process) was found to be acceptable by the practices. Other organisational assessment methods (typically based in professional bodies) are known to take considerable time, commitment, and expertise to complete. The primary care organisations found the process to be a valid and feasible assessment of practices, and it led to improved dialogue and interaction with local organisations. A total of 55 practices in nine primary health care organisations agreed to share data and allow their practice profiles to be aggregated so that comparative data could be used at feedback visits.

With respect to construct validity, there was partial support for the hypothesis that training and non-training practices differed in the degree and pattern of organisational development (Information management). Deprivation also had an influence, again with respect to Information management. Organisational size appeared not to have a significant impact on the degree of organisational development with respect to any of the three components.

For those areas where the results were not significant there are two possible explanations. Firstly, the Maturity Matrix™ may be an accurate assessment while our hypothesis about, for example, practice size is inaccurate. Secondly, our hypothesis may be correct while the Maturity Matrix™ is not capable of discriminating between the degrees of organisational development achieved by practices of different sizes. We should acknowledge that the use of the Guttman scale is based on the assumption that the majority of primary care organisations progress down the columns in similar ways. We do not assume equal value for each step in the scale nor do we propose a higher level “construct” of maturity for achieving higher scores. High scores simply equate to higher levels of organisational development. This is why we used PCA to explore whether we could identify underlying core constructs. Clearly, further construct and content validity work is necessary to examine these issues in greater depth as the instrument is developed.

PCA revealed three components into which the eight areas of organisational development could be clustered—Communication, Information management, and Quality improvement. Because this is a formative instrument designed to enable incremental improvements, it is important that the eight identified areas of activity remain as distinctive scales for the purpose of assessment, feedback, and development work with the practice. The value of the three components, however, is that they can be used to continue the work on construct and criterion related validity. Communication, information management, and quality improvement are areas that are also typically assessed by other organisational assessments and the basis for future testing for criterion related validity has been laid. With respect to construct validity, the presence of the three components means that relationships with other aspects of general practice such as team climate, organisational culture, workload, and job stress can be explored.

Strengths of study

General practitioners designed the assessment process for use in their own workplace using an iterative developmental pathway coupled with the imperative that it had to be easy to use. Face validity is therefore high. The exploratory psychometric analysis suggests that the assessment has potential construct validity. A further strength is the use of a trained external facilitator linked to an NHS primary care organisation to undertake the assessment process, which increases the reliability of the assessment and links formative organisational assessment to NHS management in primary care.

Weaknesses

The facilitators all had initial training but we did not have the resources to observe the conduct of the assessment sessions or to conduct any parallel reliability studies such as the comparison of the Maturity Matrix™ profiles with other measures of practice performance. It is possible that the facilitators had differing interpretations of the Maturity Matrix™ ratings, or that group interactions led to an unreliable assessment due to the effect of “multiple audiences” or “group think”. Further training, direct observation by a calibrating facilitator, video review of group assessments, plus test-retest assessments could potentially improve the reliability of the assessments, but the accuracy of informal consensus techniques will always be limited. It also became evident that the Maturity Matrix™ profile had limitations that had not been identified at the piloting stage and that the instrument requires further development: although face validity is high, formal content validity has not been demonstrated.

Other relevant literature

We have not identified many other formative tools of this nature. The UK RCGP Quality Team Development (QTD) scheme has similarities.26 QTD is a formative continuous quality improvement programme based on team assessment, a patient survey, and a multidisciplinary peer review visit. However, it requires more resources and is a more complex undertaking for practices, although it is reportedly well received by participants.27 It differs from the Maturity Matrix™ in that it specifically involves external peer review of the practice as well as self-assessment. We view the Maturity Matrix™ as a potential initial assessment: a framework for priority setting and planning and a tool to document progress along an organisational development pathway.

Implications

Why design a formative approach to practice assessment when the current trends are towards developing accreditation systems? Like Buetow and Wellingham,5 we contend that it is important to separate the task of quality improvement from organisational accreditation. It is precisely because of the emphasis on summative measures that a tool such as the Maturity Matrix™ is needed: although comparative benchmarking is possible, the overall goal is quality improvement. The Maturity Matrix™ is respectful of organisational starting points; it is useful for practices across the development spectrum and there is no bar to its use alongside accreditation systems. Perhaps for these reasons the latest version of the Maturity Matrix™ has been translated for use in other European countries. The value of “bottom up” approaches to quality improvement, particularly in healthcare systems with an emphasis on centrally managed approaches, has been recognised by governments.10

A first principle of education is to start at the point of existing competence. The same applies to quality improvement at the organisational level. In addition to being sensitive to existing characteristics, undertaking the Maturity Matrix™ group assessment process encourages the concept of double loop learning28—the organisation “learns how to learn” so that the concepts of change management become second nature and part of the routine of practice activity.

While the assessment method has high face validity and is well accepted in the field, it is also recognised that the 2002 version of the Maturity Matrix™ needs to change in terms of its scaling and the activity areas considered. The use of information technology is rapidly changing the way organisations adapt, requiring common patient datasets accessed from multiple sites, embedded guideline reminders, and on screen protocols.29 There is also emphasis on teamwork, delegation of clinical tasks, and role substitution. In addition, there is an increasing emphasis on patient involvement in the design and evaluation of care. These developments need to be reflected in the design of a practice assessment tool, especially one that aims to continue to motivate quality improvement.

Key messages

  • Assessment of organisational aspects of general practice is high on policy agendas.

  • The Maturity Matrix™ was developed to assess the degree of organisational development in primary medical care organisations.

  • Assessment in 55 general practices found it to be a useful tool with high face validity.

  • There was some support for the hypothesis that training status affects the degree and pattern of organisational development.

  • The size of the practice had no significant effect on organisational development.

Acknowledgments

The Maturity Matrix™ Development Group has included members of the Clinical Effectiveness Support Unit, Welsh Assembly Government, and the Capricorn Primary Care Research Network. Based on this work, an updated version (Maturity Matrix™ 2003) has been developed which is covered by trademark arrangements and which is commercially available. The authors would also like to acknowledge the efforts made by the facilitators based in the primary care organisations on which this project has depended.

REFERENCES

Footnotes

  • Funding: Melody Rhydderch holds an NHS Research and Development National Primary Care Researcher Development Award and would like to thank Professor Yvonne Carter and Professor Cliff Bailey at the National Co-ordinating Centre For Research Capacity Development for their encouragement and support.

  • Conflict of interest: none