Background Despite the widespread use of accreditation in many countries, and prevailing beliefs that accreditation is associated with variables contributing to clinical care and organisational outcomes, little systematic research has been conducted to examine its validity as a predictor of healthcare performance.
Objective To determine whether accreditation performance is associated with self-reported clinical performance and independent ratings of four aspects of organisational performance.
Design Independent blinded assessment of these variables in a random, stratified sample of health service organisations.
Settings Acute care: large, medium and small health-service organisations in Australia.
Study participants Nineteen health service organisations employing 16 448 staff, treating 321 289 inpatients and providing 1 971 087 non-inpatient services annually, representing approximately 5% of the Australian acute care health system.
Main measures Correlations of accreditation performance with organisational culture, organisational climate, consumer involvement, leadership and clinical performance.
Results Accreditation performance was significantly positively correlated with organisational culture (rho=0.618, p=0.005) and leadership (rho=0.616, p=0.005). There was a trend between accreditation and clinical performance (rho=0.450, p=0.080). Accreditation was unrelated to organisational climate (rho=0.378, p=0.110) and consumer involvement (rho=0.215, p=0.377).
Conclusions Accreditation results predict leadership behaviours and cultural characteristics of healthcare organisations but not organisational climate or consumer participation, and a positive trend between accreditation and clinical performance is noted.
- organisational culture
- organisational climate
- consumer involvement
- clinical performance
- clinical indicators
Accreditation: the certification of a programme, service, organisation, institution or agency by an authorised external body in accordance with predetermined criteria, usually expressed as standards, typically measuring structures and processes.1 2
Clinical performance: in this study, the proportion of clinical indicators that were better than the national average for those clinical indicators (an index of relative national clinical performance).
Leadership: the formal and informal ways in which influence, power and negotiation are used to shape behaviour and attitudes; the relationship of those leading to their followers.6–8
Organisational climate: the broader institutional environment within which culture and subcultures operate.9–11
Organisational context: environmental variables which can include organisational size, the scope of the organisation's operations, the policy settings, the economic and financial circumstances or constraints and the strategy which the organisation is pursuing.12 13
Organisational culture: normative shared values, beliefs, practices and behaviours manifesting in organisations and groups, and emerging from loosely and tightly coupled social relationships.10 14 Manifestations include staff well-being, communication, teamwork, decision-making and the standard of care provided and quality and safety focus of staff.
Standards and criteria: standards are agreed, and specified yardsticks or goals used as a reference point against which performance can be assessed; criteria are detailed specifications of a standard.15
While there are many models, tools and approaches designed to improve healthcare quality and patient safety,16–22 convincing evidence of their effectiveness is sparse. One under-researched but ubiquitous strategy is accreditation.23–32 The purpose of the Australian Network for the Evaluation of Accreditation and Standards in Healthcare is to study the validity and effects of accreditation. A detailed protocol for investigating accreditation involving this Network has been published previously.1 This paper reports on study 1 of four studies in the Network's research programme.
Accreditation involves the assessment of organisational and clinical performance against predetermined standards usually by multiple means such as self-appraisal, peer review interviews, scrutiny of documentation, checking of equipment and weighing of key or representative clinical and organisational data. Accreditation programmes in countries such as the USA, Canada and Australia follow this model. The MARQuIS study of hospitals in European countries found considerable levels of support for and participation in accreditation, and opportunities for using external and internal assessment strategies.33 34
Accreditation assessments of this type differ from checklist-style, less wide-ranging models. The comprehensive approaches are commonly conducted at both organisational (eg, hospital, general practice, aged care) and service (eg, laboratory, ward, clinical unit) levels. The intention is to certify that organisations and their constituent services meet current designated standards. Improvement gradients are embedded in the process as standards are revised, and raised, over time.15 Accredited organisations and services receive public recognition of their status. In most accreditation models, organisations can be accredited, or be granted time to improve following remedial recommendations, or, if performance falls below stipulated standards, they can lose their accreditation status. Accreditation processes are therefore designed to ensure both compliance and improvement by stimulating positive and longitudinal change in organisational and clinical practices. Through these ends, the goal is for accreditation to contribute to the production of high-quality and safe care for the benefit of consumers.
The evidence of the value of accreditation is indeterminate.35 Although there are studies suggesting that accreditation promotes service change,25 36–38 organisational change39 40 and professional development,41–43 it is equivocal whether quality of care or patient outcomes show improvement which can be attributed to accreditation.23 31 44–46 A randomised controlled trial of a facilitated quality improvement intervention, part of an accreditation process in Dutch general practices, showed that practices in the intervention group were more likely to have started and completed a greater number of quality improvement projects than those in the non-intervention group.47 However, no significant association was found in a North American study examining Joint Commission on Accreditation of Healthcare Organisation's accreditation scores and the Agency for Healthcare Research and Quality's Inpatient Quality Indicators and Patient Safety Indicators.31 The limited work examining the links between consumers' views or patient satisfaction and accreditation44 48–50 has found no clear relationships.
The healthcare context, that is the broad environment and situational organisational variables, is important in facilitating health service changes,29 51 with factors such as communication, case complexity, work load, education and information systems being seen as enablers or barriers.52 However, attributing change to contextual factors in association with strategies such as accreditation is problematic. Nevertheless, while commentators differ with respect to the weight that should be placed on contextual variables, leadership, organisational culture, organisational climate and consumer involvement in care processes are frequently cited.53
Achieving accreditation is typically regarded as a predictor of clinical care and organisational effectiveness by funders, institutions, patients and the public. Accreditation leads to confidence in the quality of care provided by an organisation, giving high levels of assurance about processes, structures and outcomes of care, following the classic distinction of Donabedian.54 The current research investigates whether accreditation ratings reflect these qualities.
Aims and hypotheses
We aimed to determine whether results of accreditation are associated with independent ratings of clinical and organisational performance, testing two hypotheses. We first hypothesised that accreditation performance would be positively associated with clinical performance. This is a measure of the extent to which accreditation predicts quality of care. We second hypothesised that accreditation would be positively associated with blind, independently assessed measures of organisational culture, organisational climate, consumer involvement and leadership, thereby measuring how accreditation performance relates to contextual factors which can facilitate continuous clinical and organisational improvement.
Design and sample
We identified health service organisations participating in the accreditation programme of the Australian Council on Healthcare Standards (ACHS), the largest Australasian health services accreditation and standards provider. ACHS's 1050 member organisations account for 94% of beds and 76% of acute health services in Australia.1 ACHS's accreditation model, known as the Evaluation and Quality Improvement Program, is shown in figure 1. It runs over 4 years in a cycle of activity involving an organisation-wide self assessment with support from ACHS (year 1), an organisation-wide survey on site and the development of a quality action plan following feedback and recommendations (year 2), self assessment with support (year 3) and an on-site follow-up visit by surveyors known as the periodic review (year 4). The organisation-wide survey and periodic review are reviews undertaken by external peer surveyors who rate the organisation's performance against the standards and criteria. ACHS member organisations can be designated as accredited, be given time to improve or lose their accreditation status.
As there are no control groups available to compare against non-accredited health services because almost all organisations participate in accreditation processes, we randomly selected study sites drawn from ACHS's membership against a sampling frame to ensure representation in terms of organisational size (small, medium, large), sector (Australia's health system is two-thirds public, one-third private), geographic location (metropolitan, regional, rural, remote) and jurisdiction (state and territory).
Nineteen accredited healthcare organisations with 3910 beds, employing 16 448 staff, treating 321 289 inpatients and providing 1 971 087 ambulatory services annually, were included in the study (figure 2). Data were collected according to our study procedures (figure 3).
Clinical and organisational performance are difficult to measure, and often there are no stipulated definitions of common terms. We reviewed the literature and adopted the definitions provided above. Accreditation performance was based on ACHS survey teams' ratings of organisational performance against 43 criteria. Measures of quality of care were based on submitted ACHS clinical indicators routinely gathered and reported at six monthly intervals as tools to stimulate improvement in the quality of care.3 55 These data were used to assess the relative clinical performance of the accredited organisations in this study. Comparison of the organisation's rates with the national rates for each indicator provides a national benchmark for the organisation's performance. Although the type and number of indicators reported by individual organisations differ, each provides a range of indicators generally reflective of the main services provided by that health service organisation, including a combination of condition-specific as well as organisation-wide indicators.
Organisational contextual measures were assessed via fieldwork assessments. Teams of researchers, blinded to the accreditation performance of the organisation, conducted ethnographic and interview studies in each of the sampled organisations.
For such a large-scale study, we established a central coordinating group and organised four independent teams, blinded from each other to avoid cross-study contamination, to gather, analyse and review data. The central coordinating group identified personnel for the four independent teams and organised the distribution of data and collection of results. The accreditation survey team obtained and summarised data from the latest accreditation surveys of 19 participant health service organisations.
The clinical indicator team analysed routinely gathered clinical indicator data for the period 2001–2006 in 16 of the participant organisations. Three smaller organisations which were relatively new to accreditation did not submit data in the study period. The clinical indicator performance of the study organisations was compared against the national average performance for each indicator collected by calculating the observed and expected numerators for each clinical indicator. The observed and expected values were summed over the 5 years; if the observed was better than the expected, the indicator scored a ‘1,’ and if worse, a ‘0.’ The study organisations were then ranked according to the proportion of their clinical indicators that were better than the national average.
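The scoring and ranking procedure described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the organisation names, indicator identifiers and counts are invented, and it assumes for simplicity that a lower observed count than expected is "better" (ie, fewer adverse events), whereas the direction of "better" in practice depends on each indicator's definition.

```python
def score_indicator(yearly):
    """yearly: list of (observed, expected) counts, one pair per reporting period.
    Sums over the study period; scores 1 if observed is better (here: lower)
    than expected, else 0."""
    observed = sum(o for o, _ in yearly)
    expected = sum(e for _, e in yearly)
    return 1 if observed < expected else 0

def rank_organisations(data):
    """data: {org: {indicator: [(observed, expected), ...]}}.
    Ranks organisations by the proportion of their indicators scoring 1."""
    proportions = {
        org: sum(score_indicator(periods) for periods in indicators.values())
             / len(indicators)
        for org, indicators in data.items()
    }
    return sorted(proportions.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data: two organisations, a handful of indicators each
data = {
    "Org A": {"ci1": [(3, 5), (2, 4)], "ci2": [(7, 6)]},  # 1 of 2 better
    "Org B": {"ci1": [(1, 5)], "ci3": [(2, 3), (1, 2)]},  # 2 of 2 better
}
ranking = rank_organisations(data)  # Org B (1.0) ranks above Org A (0.5)
```

The proportion, rather than a raw count, is used so that organisations reporting different numbers of indicators remain comparable.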
The organisational assessment team arranged prefieldwork meetings with each of the 19 participant organisations and, in 2006 and 2007, conducted ethnographic observations, semistructured interviews and focus groups with staff according to predetermined assessment indicators for culture, climate, leadership and consumer involvement. The issues examined during interviews and focus groups and for which evidence was sought during observational sessions are shown below (see table 1, Organisational performance measures). An average of 45 h of fieldwork observations, and on average 8.9 interviews (range 3–15) and 9.2 focus-group sessions (range 3–18) with on average five participants per group were conducted per site. A total of 989 staff were interviewed or enrolled in focus groups. Organisational assessment team field notes were transcribed and a summary returned to participant organisations. This was to provide feedback and was part of the reciprocity of the research process.56
A separate statistical analysis team reviewed the data sets. They were subjected to descriptive and inferential non-parametric statistical procedures as described below.
Data structure and verification processes
Accreditation data comprised the assessments of accreditation surveyors, who reported for each organisation on five-point scales, one for each of the 43 criteria covering the continuum of care, leadership and management, human resources management, information management, safe practice and environment, and improving performance (the primary scores). Each score on the 1–5 scale was then rated as falling in the high or low range (the secondary scores). Both the primary and secondary scores were summed, and organisations were ranked from highest to lowest accreditation performance on the basis of these summed scores. For example, a healthcare organisation attaining moderate achievement on a criterion received a primary score of 3. If the performance against that criterion was better than average, it received a secondary score of 0.4.
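One plausible reading of the summation above can be sketched as follows. This is an illustrative interpretation only: the three-criterion examples and organisation names are invented, and it assumes the 0.4 secondary score is added per better-than-average criterion, which is the reading suggested by the worked example in the text rather than a documented ACHS formula.

```python
def accreditation_total(criteria_scores, secondary_score=0.4):
    """criteria_scores: list of (primary, better_than_average) pairs,
    one per criterion. primary is the 1-5 rating; the flag marks whether
    the secondary score applies. Returns the summed score used for ranking."""
    total = 0.0
    for primary, better_than_average in criteria_scores:
        total += primary
        if better_than_average:
            total += secondary_score  # secondary score for high-range performance
    return total

# Two hypothetical organisations assessed on three criteria each
org_x = [(3, True), (4, False), (3, True)]
org_y = [(3, False), (3, False), (4, True)]
ranked = sorted(
    {"X": accreditation_total(org_x), "Y": accreditation_total(org_y)}.items(),
    key=lambda kv: kv[1],
    reverse=True,
)  # X (10.8) ranks above Y (10.4)
```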
Four separate, blinded expert panels, each with three panelists, reviewed the organisational assessment data. These data, comprising a set of organisational and interview field notes for each participant organisation, were forwarded to the four panels, one each for the assessment of organisational culture, organisational climate, leadership and consumer involvement (figure 3). Panel members were also blind to the organisations' accreditation and clinical performance. Panelists followed the RAND-UCLA phased method for analysing social data.57 Each panel member individually rated the sampled organisations on their variable from highest to lowest. Next, members of each panel met and reconciled individual rating differences, creating a composite ranking. Panels then forwarded their group ranking schedules to the central coordination group for analysis.
The measure for determining an organisation's clinical performance was the proportion (ie, the percentage) of clinical indicators that were better than the national average for those clinical indicators (an index of each sampled organisation's relative national clinical performance). This index was used to rank the organisations' quality of care.
Spearman rank order correlations (rho) were calculated between accreditation performance and clinical performance scores and the four other organisational variables. Rank order correlations were calculated between the organisational variables to determine their relationships with each other. The Kendall coefficient of concordance (W) was computed to examine whether there was a significant relationship overall between the set of ratings of the five variables. The association of demographic variables (organisational size, health sector and geographic location) with accreditation ratings was investigated using the Mann–Whitney U test and Kruskal–Wallis one-way analysis of variance. Probability levels were set at <0.05; however, as is typical in organisational studies, in view of the organisational sample sizes, trends (p<0.10) were noted.
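The two rank-based statistics named above can be computed from rankings alone. The sketch below shows textbook formulas for Spearman's rho (for rankings without ties) and the Kendall coefficient of concordance; the five-item rankings are invented for illustration and are not the study's data, which covered 19 organisations.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman correlation for two rankings of the same n items, no ties:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendall_w(rankings):
    """Kendall coefficient of concordance for m rankings of n items:
    W = 12 * S / (m^2 * (n^3 - n)), where S is the sum of squared
    deviations of the items' rank totals from their mean."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of five organisations on three variables
accreditation = [1, 2, 3, 4, 5]
culture = [2, 1, 3, 5, 4]
climate = [1, 3, 2, 4, 5]

rho = spearman_rho(accreditation, culture)        # 0.8 for these ranks
w = kendall_w([accreditation, culture, climate])  # 1.0 would mean full agreement
```

In practice, library routines such as `scipy.stats.spearmanr` handle ties and also return p-values; the hand-rolled formula above is shown only to make the rank-based logic explicit.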
Enrolled organisations' characteristics are presented in table 2. When their characteristics were compared with available national data,58 the enrolled organisations were found to represent 5.1% of beds and 4.5% of Australia's patient separations.
Accreditation performance was assessed against the listed criteria (table 3) and ratings scales (table 4). Organisational performance was assessed against the identified measures which guided the semistructured interviews and focus-group questions (table 1).
A positive trend was observed between accreditation ratings and the index of relative national clinical performance (rho=0.450, p=0.080). A positive correlation was found between accreditation performance and both organisational culture (rho=0.618, p=0.005) and leadership (rho=0.616, p=0.005) (table 5). Organisational climate (rho=0.378, p=0.110) and consumer involvement (rho=0.215, p=0.377) were not significantly associated with accreditation ratings.
Some organisational variables were significantly related to each other, specifically organisational culture with leadership and organisational climate, and clinical performance with leadership. Consumer involvement was not associated with any organisational characteristic. The Kendall coefficient of concordance (W=0.043, χ2=2.733, df 4, p=0.604) indicated that the association among the set of the five clinical and organisational characteristics overall was not significant. A Mann–Whitney U test comparing the accreditation ratings of organisations in the public and private sectors revealed no significant difference (U=29.00, z=0.877, p=0.380). The Kruskal–Wallis analysis of variance comparing the accreditation ratings of large, medium and small organisations revealed no significant differences (χ2=0.202, df 2, p=0.904). Nor was there a significant difference between the accreditation ratings of organisations in different locations, namely metropolitan, regional and rural/remote (Kruskal–Wallis χ2=0.521, df 2, p=0.771).
The results of the Network for the Evaluation of Accreditation and Standards in Healthcare study show that accreditation performance was significantly positively correlated with organisational culture and leadership. There was a positive trend between accreditation and clinical performance. Accreditation was unrelated to organisational climate and consumer involvement.
The finding that those organisations with a positive culture and demonstrated leadership perform better on accreditation than organisations lacking these characteristics indicates that accreditation performance is an accurate reflection of contextual organisational factors believed to be important in enabling or inhibiting quality of care and continuous clinical improvement.
This result represents a piece of the jigsaw in understanding the complex question of whether accreditation performance can accurately predict aspects of health service performance. Finding no relationship between accreditation and organisational climate suggests that this broad contextual variable is less sensitive than others for distinguishing between organisations.
We found weak evidence of an association between accreditation and clinical performance, measured via an index of relative national performance of clinical indicators. This confirms findings from other studies where the relationship between specific quality indicators and accreditation performance has been inconsistent or inconclusive.23 31 35 44–46
Most participant organisations had low levels of consumer participation, suggesting it is timely to review the ways health services can involve consumers more effectively, and how accreditation can reflect more clearly the needs of consumers. Future accreditation criteria should place more emphasis on this. Different approaches to consumer participation need to be trialled and evaluated. Little is known about how to involve consumers in ways that can impact positively on quality of care, although some work is being done to engage them in accreditation processes.50
Our work prompts a rethink of how accreditation contributes to clinical and organisational performance. Accreditation and the application of standards to health services via multiple assessment methods in some form will doubtless continue in the future, as no one has advocated a viable alternative to this model. It follows that strategies are required to reinforce the way accreditation might lead to improved quality of care, strengthen leadership, culture and climate, and how these factors in turn might mediate accreditation performance. Alternative approaches such as unannounced surveys and tracking patients with tracer methodologies are designed to help bring about improvements in accreditation processes and organisational and clinical systems, but are relatively untested.59 60
A limitation of our work is the power of the study. Although we examined a random sample of 19 organisations, representing substantial numbers of staff, inpatients and ambulatory care episodes, and made careful, triangulated assessments of their accreditation outcomes, organisational characteristics and measures of quality, a larger study involving more participant organisations would have enabled more detailed analyses examining the association between specific components of accreditation and the measures. One challenge is the extent of fieldwork required to make organisational assessments. Another is the complexities of using clinical indicators to measure the quality of care.61 62 A limitation in demonstrating a relationship between accreditation performance and clinical indicator performance is the current way in which clinical indicator data are used for accreditation purposes. The mix of indicators reported is user-determined rather than centrally prescribed, and there is variation in the number and nature of indicators reported by organisations. The indicators are primarily for use as internal quality improvement tools rather than for comparing the performance of organisations per se. This is why we created an index of each organisation's clinical performance and compared this against national clinical performance. While this represents progress in assessing quality of care, it is important to move towards objective, independent measurement of clearly defined clinical standards to underpin future work. A third challenge lies in defining and measuring multiple variables, particularly organisational and contextual variables.
Despite decades of accreditation practice and calls for research into accreditation,26 29 30 63 64 there was until recently little convincing evidence about whether and how accreditation predicts health service performance. This is the largest study to investigate these relationships empirically and to present a multimethod approach to tackle some of the research challenges presented. Work is needed to build on and extend this research, as continued large-scale investments in accreditation processes warrant evidence of its effectiveness.
The study forms part of the research programme into accreditation led by investigators in the Centre for Clinical Governance Research, Australian Institute of Health Innovation, Faculty of Medicine at University of New South Wales, Sydney, Australia. The Centre's industry partners for this research are the Australian Council on Healthcare Standards, Ramsay Health Care, various consumer groups and the NEASH collaborators.
Funding The research was supported under Australian Research Council's Linkage Projects funding scheme (project number LP0560737).
Competing interests None.
Ethics approval Ethics approval was provided by the University of New South Wales' Human Research Ethics Committee (HREC), approval 05081.
Provenance and peer review Not commissioned; externally peer reviewed.