Abstract
Background Given rising costs and changing payment models, healthcare organisations are increasingly focused on value and efficiency. The goal of our study was to develop survey items to assess clinician and staff perspectives about the extent to which the organisational culture in hospitals and medical offices supports value and efficiency.
Methods Development began with a literature review and interviews with experts as well as with clinicians and staff from hospitals and medical offices. We identified key areas of value and efficiency culture, drafted survey items and conducted cognitive testing. Using purposive sampling to select sites, the 36-item surveys were pilot tested in 47 hospitals and 96 medical offices. Psychometric analysis was conducted on data from 3951 hospital respondents (42% response) and 1458 medical office respondents (63% response).
Results Factor loadings, multilevel confirmatory factor analysis model fit and reliability estimates were acceptable for the 13 items grouped into 4 composite measures: Empowerment to Improve Efficiency (3 items), Efficiency and Waste Reduction (3 items), Patient Centeredness and Efficiency (3 items) and Management Support for Improving Efficiency and Reducing Waste (4 items). All composite measures were significantly intercorrelated and related to the four Overall Ratings of Healthcare Quality, indicating adequate conceptual convergence among the measures. Eight items assessing Experience With Activities to Improve Efficiency were also included.
Conclusion We developed psychometrically sound survey items measuring value and efficiency culture. When added to the Agency for Healthcare Research and Quality Surveys on Patient Safety Culture, the item sets extend those surveys by assessing additional dimensions of organisational culture that affect care delivery. Healthcare organisations can use these item sets to assess how well their organisational culture supports value and efficiency and identify areas for improvement.
- surveys
- healthcare quality improvement
- hospital medicine
- ambulatory care
- safety culture
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Background
Given rising healthcare costs around the world, governments, policy makers, payers and healthcare organisations are increasingly focused on value and efficiency in healthcare delivery. Value in healthcare has been defined as patient experiences and outcomes over cost.1 This definition includes patient experience in the equation, which is not explicit in all definitions of value.2 Efficiency is related to value and is a measure of the cost of care associated with a specified level of quality3 and includes avoiding waste of equipment, supplies, ideas and energy.4 Implicit in these definitions is the understanding that cost-cutting cannot be the end goal at the expense of outcomes or patient experiences. Rather, the goal is to improve quality of care and patient experiences while reducing waste, controlling costs and leveraging the innovative ideas and energy of the healthcare workforce in accomplishing this goal.
In accordance with these value and efficiency principles, the Medicare Access & CHIP Reauthorization Act of 2015 in the USA created the Quality Payment Program. This programme changed the way clinicians are rewarded by emphasising value over volume, streamlining quality programmes and providing bonus payments for participation in alternative payment models.5 In value-based healthcare delivery, payments to organisations and clinicians are based on patient health outcomes and value is derived from measuring health outcomes against the cost of delivering those outcomes.6 Accordingly, various alternative payment models involving patient-centred medical homes and accountable care organisations are being implemented in healthcare systems throughout the USA to provide incentives for patient-centred, high-quality, cost-efficient care.7
Waste can be defined as anything that does not add value or is unnecessary for patients, clinicians or staff—such as wasted time; wasted materials; extra steps in a process; rework; and unnecessary tests, procedures, treatments or services. One of the Institute of Medicine’s (IOM) six aims for healthcare was that it should be efficient and avoid waste, as both components are needed to achieve value.4 Healthcare delivery that is focused on value and efficiency benefits patients, the workforce and healthcare organisations as well as payers. Healthcare organisations have therefore been motivated to apply Lean, Six Sigma and high-reliability concepts to reduce waste and improve value and efficiency while maintaining high-quality care at a reasonable cost. When there is successful buy-in across the entire organisation to continuously practise, monitor and improve waste reduction at every level, ‘lean thinking’ becomes deeply embedded in the organisation’s culture.8 Focusing on value and efficiency requires a culture in which the principles of, and beliefs about, value and efficiency are supported, rewarded, expected and accepted.
Organisational culture refers to the beliefs, values and norms shared by clinicians and staff within healthcare organisations that influence their actions and behaviours.9 Because organisational culture is a critical component of the quality and safety of healthcare service delivery,10 healthcare organisations need to regularly examine culture.11 Although numerous metrics of efficiency in healthcare exist,12–14 there are few publicly available measures that assess the ways in which an organisation’s culture supports value and efficiency. We identified one survey that assesses clinicians’ perspectives on high-value care delivery, but pilot testing was limited to only residents and hospitalists at two hospitals; no testing was conducted in medical offices.15
Given limited measures of the culture of value and efficiency, our study goal was to develop psychometrically sound survey items that assess the extent to which the organisational culture in hospitals and medical offices places a priority on and promotes efficiency, waste reduction, patient-centredness and high-quality care at a reasonable cost. The Value and Efficiency Item Sets were designed as supplemental items that can be added to the end of the existing Agency for Healthcare Research and Quality (AHRQ) Surveys on Patient Safety Culture™ (SOPS®) Hospital and Medical Office Surveys.16 17 As supplemental items, the value and efficiency measures can extend the existing SOPS Hospital and Medical Office Surveys by assessing additional dimensions of organisational culture that are related to the safety and quality of care delivery.
Methods
All study procedures were approved by Westat’s Institutional Review Board (FWA 00005551). The study protocol was carried out in accordance with relevant guidelines and regulations.
Survey item development
We followed a systematic and iterative survey development process that involved reviewing existing literature related to healthcare value, efficiency, waste reduction, patient-centredness, leadership and organisational climate and culture. We also reviewed existing surveys in these areas. We conducted a thorough review, stopping once we reached concept saturation. We then identified common concepts and themes across these articles. In addition, we conducted semistructured background interviews with nine value and efficiency researchers and experts to identify key areas of focus. We also interviewed 13 clinicians and staff in hospitals and medical offices to ask how value and efficiency principles affect their work and the way patient care is delivered.
After synthesising data from these various sources, we identified key areas of value and efficiency culture. Our conceptual framework drew from concepts in the 2013 IOM report Best Care at Lower Cost2 and from literature defining terms like value,1 efficiency3 4 and waste reduction.18 In refining our concepts, we omitted key areas that overlapped with content on the existing SOPS surveys (eg, teamwork, organisational learning, communication openness, coordination/information exchange) to avoid redundancy. Across our various sources, we found that the same key areas applied consistently to both hospitals and medical offices. Although we were initially open to the possibility that the item sets could differ, the parallel nature of the Value and Efficiency Items for hospitals and medical offices emerged from our research process. We therefore drafted survey items to assess each of these key areas, keeping the items parallel across settings but varying item text as needed. To assess item comprehension, relevance and ease of responding, we conducted two rounds of individual cognitive interviews with 16 hospital and 18 medical office clinicians and staff, including physicians, nurses, managers, technicians, nurse practitioners, medical assistants and clerks. Based on the cognitive interview results, we further refined and improved item wording.
In background interviews and cognitive interviews with clinicians and staff, we discovered that value was a somewhat abstract concept that some were not familiar with. However, we ultimately decided that it was important to retain the concept of value in the survey measures. Without the concept of value, the survey items would focus solely on waste reduction and efficiency, which could be interpreted as emphasising cost-cutting at the expense of quality of care or patient experience. We therefore included definitions of the terms waste, efficiency and value in the beginning of the surveys to help orient respondents and establish the focus of the survey questions (the definitions are shown in online supplemental files C and D).
Additional input on survey item development was provided by a 17-member Technical Expert Panel (TEP) that provided feedback at key points in the development process. The TEP helped identify areas of focus, provided suggestions for item wording and helped decide which items to retain or drop. The TEP included representatives from large healthcare systems in the USA, healthcare professional associations and research organisations in the USA, Sweden and the UK (see online supplemental file A for a list of TEP members).
Measures
The pilot test item sets for hospitals and medical offices each included a total of 36 survey items. Twenty-two of the 36 items measured four key areas of organisational culture pertaining to value and efficiency. These key areas served as our four a priori composite measures, which were groups of two or more survey items that assess the same area of culture related to value and efficiency. The four a priori composite measures were: Empowerment to Improve Efficiency (five items), Efficiency and Waste Reduction (six items), Patient Centeredness and Efficiency (five items) and Management (phrased as ‘Owner/Managing Partner/Leadership’ in medical offices) Support for Improving Efficiency and Reducing Waste (six items). Response options used either 5-point agreement scales (Strongly disagree to Strongly agree) or frequency scales (Never to Always), including a Does not apply or Don’t know (DNA/DK) response option. These response options are also used on the SOPS surveys.
In addition, 10 of the 36 items asked about respondents’ Experience with Activities to Improve Efficiency—whether respondents had mapped a workflow process, served on a team or committee to make a work process more efficient or conducted other similar activities (Yes/No). These experience items were included to assess the extent to which clinicians and staff within healthcare organisations applied value and efficiency concepts in their everyday work activities. Respondents were also asked four Overall Ratings of Healthcare Quality to assess whether their site was (1) Patient centered, (2) Effective, (3) Timely and (4) Efficient (5-point rating scale—Poor to Excellent). As noted previously, brief definitions of waste, efficiency and value in healthcare were provided at the beginning of the surveys.
Pilot test
The 36-item Value and Efficiency Item Sets were pilot tested in 2014 in 47 hospitals and 96 medical offices in the USA. Because the Value and Efficiency Item Sets were long, and to maximise response rates, we administered them as stand-alone instruments in the pilot test rather than adding them to the end of the existing SOPS surveys. In addition, we conducted a large-scale pilot test to ensure that we had adequate respondent and site-level data for psychometric analysis. To facilitate the generalisability of our findings, we used purposive sampling to select sites that varied by geographic region, size, hospital teaching status and medical office specialty. Given the small number of staff within most medical offices, we needed more medical offices than hospitals to obtain enough data at the individual respondent level for analysis purposes. In larger pilot hospitals, up to 200 clinicians and staff were selected using stratified random sampling based on hospital unit and staff position. In smaller hospitals with fewer than 200 clinicians and staff, and in all medical offices, all clinicians and staff (a census) were asked to complete the survey items.
The hospital item set was administered with an email invitation to a web survey. The medical office item set was administered by either paper or web, but only one mode was used at any given site. Sites received full remuneration ($1000 for hospitals/$400 for medical offices) if they obtained at least a 60% response rate, and a reduced amount if their response rate was lower.
Analyses
Psychometric analyses on the hospital and medical office data included item analysis, site-level per cent positive scores and correlations for all survey items. In addition, the following analyses were conducted on the items comprising the four a priori composite measures: internal consistency reliability, intraclass correlations (ICC(1)) and design effects, and multilevel confirmatory factor analysis (MCFA). Each of the analyses is described in more detail below. MCFA was conducted using Mplus V.8.5. All other analyses were conducted in SAS V.9.4.
Item analysis
We initially examined item frequencies to review response variability and identify items with high percentages of missing data or DNA/DK responses. Items with little response variability do not differentiate higher-scoring from lower-scoring sites and therefore would not be particularly useful. Any items with more than 90% of respondents responding positively (Strongly agree/Agree or Always/Most of the time) were flagged as having low variability and considered for dropping. Items with more than 30% missing or DNA/DK responses were also considered for dropping, as such items may not be relevant to a large proportion of respondents. However, we did not rely solely on items flagged during item analysis to determine which items to drop. We also examined results from other psychometric analyses (described next) and TEP feedback, weighing the importance and relevance of item content.
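To make these flagging rules concrete, below is a minimal Python sketch of how they could be applied to a respondents-by-items table. It is illustrative only (the study's analyses were run in SAS), and the response codes, column layout and function name are our own assumptions.

```python
import pandas as pd

# Illustrative sketch only; the response codes, column layout and function
# name below are hypothetical assumptions, not the study's actual SAS code.
POSITIVE_CODES = {4, 5}   # Agree/Strongly agree or Always/Most of the time
DNA_DK_CODE = 9           # Does not apply/Don't know

def flag_items_for_review(responses: pd.DataFrame) -> pd.DataFrame:
    """Apply the two item-analysis rules described above to a respondents-by-items table."""
    rows = []
    for item in responses.columns:
        col = responses[item]
        pct_missing_dnadk = 100 * (col.isna() | (col == DNA_DK_CODE)).mean()
        valid = col[col.notna() & (col != DNA_DK_CODE)]
        pct_positive = 100 * valid.isin(POSITIVE_CODES).mean() if len(valid) else float("nan")
        rows.append({
            "item": item,
            "pct_positive": pct_positive,
            "pct_missing_dnadk": pct_missing_dnadk,
            "flag_low_variability": pct_positive > 90,          # >90% positive
            "flag_high_missing_dnadk": pct_missing_dnadk > 30,  # >30% missing/DNA/DK
        })
    return pd.DataFrame(rows)
```

As noted above, such flags would only be a starting point; decisions to drop items also drew on the other psychometric analyses and TEP input.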
Internal consistency reliability
Internal consistency reliability was assessed using Cronbach’s alpha to determine the extent to which respondents answered items within each of our four a priori composite measures in a similar way. The minimum criterion for acceptable Cronbach’s alpha reliability is 0.70.19
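For reference, the standard Cronbach's alpha calculation can be sketched as follows. This is an illustrative Python implementation assuming a complete (no missing data) respondents-by-items matrix; it is not the code used in the study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix with no missing values.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_score_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

# A composite measure would meet the criterion cited above if
# cronbach_alpha(scores) >= 0.70.
```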
Intraclass correlations (ICC(1)) and design effects
Respondents were nested in hospitals or medical offices, which can violate the basic statistical assumption of independence of responses. When the degree of non-independence is large, results from an individual-level confirmatory factor analysis (CFA) may be biased or incorrect. To determine if we needed to account for the clustered nature of the data by conducting an MCFA, we computed ICC(1) for each survey item in both the hospital and medical office data. ICCs help to determine if substantial variation exists between sites compared with variation within sites. ICCs greater than 0.05 (or 5%) typically indicate that group membership has an impact on individuals’ responses.20 Because ICCs can be affected by both the number of groups and group size, we also examined design effects, which take into account within-site sample size. A design effect of 2 or greater suggests that group membership does have an impact on individuals’ responses.21 ICC(1)s greater than 0.05 and/or design effects equal to or greater than 2 indicate that MCFA is necessary to account for the grouping effects on individual responses.22
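The logic behind these two statistics can be sketched as follows, using the one-way ANOVA estimator of ICC(1) and the common design-effect approximation. This is an illustrative Python sketch with hypothetical column names, not the study's analysis code, and it uses the average site size as a simplification of the ANOVA-adjusted group size.

```python
import numpy as np
import pandas as pd

def icc1_and_design_effect(df: pd.DataFrame, site_col: str, item_col: str) -> tuple[float, float]:
    """ICC(1) from a one-way ANOVA decomposition, plus the design effect
    1 + (average site size - 1) * ICC(1). Column names are hypothetical."""
    groups = [g[item_col].dropna().to_numpy() for _, g in df.groupby(site_col)]
    groups = [g for g in groups if len(g) > 0]
    sizes = np.array([len(g) for g in groups], dtype=float)
    n_groups = len(groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (n_groups - 1)
    ms_within = ss_within / (sizes.sum() - n_groups)
    k = sizes.mean()                                      # average site size (approximation)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    design_effect = 1 + (k - 1) * icc1
    return icc1, design_effect
```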
Multilevel confirmatory factor analysis (MCFA)
The purpose of confirmatory factor analysis is to determine how well a proposed factor structure fits the data. Because we developed survey items to measure specific areas of value and efficiency culture, our goal was to test the fit of the data to our four a priori composite measures. MCFA was conducted on the four a priori composite measures and their associated items, taking into account the nesting of individuals in hospitals and medical offices. All four a priori composite measures were tested in one model for the hospital survey and another model for the medical office survey. Factor loadings above 0.40 were considered acceptable, indicating that an item’s relationship to its hypothesised composite measure is adequate.23 The same criterion was used to evaluate both within-site and between-site (ie, hospital or medical office) factor loadings. We also examined several fit indices for acceptability of model fit: χ² divided by df (χ²/df) (<5);24 the comparative fit index (CFI) (≥0.95);25 the standardised root mean square residual (SRMR) (<0.08)26 and the root mean square error of approximation (RMSEA) (<0.06).25 Whereas χ²/df, CFI and RMSEA were examined for the entire model regardless of level, SRMR was examined separately for the within-site and between-site levels (ie, within-site SRMR and between-site SRMR).
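As a convenience for tallying these cutoffs, a small helper like the one below could summarise whether a fitted model meets each criterion. The fit statistics themselves would come from the MCFA software output (the study used Mplus V.8.5); the function name and signature here are our own illustrative assumptions.

```python
def mcfa_fit_is_acceptable(chi2: float, df: int, cfi: float,
                           srmr_within: float, srmr_between: float,
                           rmsea: float) -> dict[str, bool]:
    """Check reported fit statistics against the cutoffs cited above."""
    return {
        "chi2/df < 5": (chi2 / df) < 5,
        "CFI >= 0.95": cfi >= 0.95,
        "within-site SRMR < 0.08": srmr_within < 0.08,
        "between-site SRMR < 0.08": srmr_between < 0.08,
        "RMSEA < 0.06": rmsea < 0.06,
    }
```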
Per cent positive scores and correlations among composite measures
We calculated site-level per cent positive scores for each composite measure, each Overall Rating of Healthcare Quality, each Experience with Activities item and the average score for Experience with Activities. Site-level per cent positive scores are the percentages of those within a site who answered positively for each item (% Strongly agree/Agree, Always/Most of the time, Yes or Excellent/Very good). These site-level per cent positive scores on the items within each of the four composite measures were equally weighted and averaged to compute site-level composite measure scores. The site-level average score for Experience with Activities was calculated using the same method. Item and composite measure per cent positive scores could range from 0 to 100. We also examined Spearman’s rank order correlations among the composite measures, rating items and the average score for Experience with Activities at the hospital and medical office levels. These measures should have moderate or moderately high intercorrelations if they are assessing related concepts.
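The scoring and correlation steps just described can be sketched as follows. This is an illustrative Python example; the column names, response coding and data layout are hypothetical assumptions, and DNA/DK responses are assumed to have been recoded to missing beforehand.

```python
import pandas as pd
from scipy.stats import spearmanr

POSITIVE_CODES = {4, 5}   # eg, Agree/Strongly agree, Always/Most of the time

def site_percent_positive(df: pd.DataFrame, site_col: str, items: list[str]) -> pd.DataFrame:
    """Per-site per cent positive (0-100) for each item, ignoring missing responses."""
    def pct_pos(col: pd.Series) -> float:
        valid = col.dropna()
        return 100 * valid.isin(POSITIVE_CODES).mean() if len(valid) else float("nan")
    return df.groupby(site_col)[items].agg(pct_pos)

def composite_score(site_item_scores: pd.DataFrame, items: list[str]) -> pd.Series:
    """Equally weighted average of item per cent positive scores within one composite."""
    return site_item_scores[items].mean(axis=1)

def spearman_between(a: pd.Series, b: pd.Series) -> tuple[float, float]:
    """Spearman rank-order correlation between two site-level scores."""
    rho, p_value = spearmanr(a, b)
    return rho, p_value
```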
Results
Across the 47 hospitals, 3951 responded out of 9375, for a 42% overall response rate (average of 84 respondents per hospital; range: 25–132 respondents). Across the 96 medical offices, 1458 responded out of 2321, for a 63% overall response rate (average of 15 respondents per site; range: 5–80 respondents). The characteristics of pilot study sites are shown in table 1 and the characteristics of respondents in table 2.
Item analysis
Online supplemental table 1 shows the average per cent positive and percentage of missing or DNA/DK responses for each item. One item had low variability (95% positive in hospitals; 92% in medical offices): ‘We are responsive to patient or family member concerns about the patient’s care’ (Patient Centeredness and Efficiency). One item had high missing/DNA/DK (52% in hospitals; 43% in medical offices): ‘We invite patients to serve on advisory panels or committees to help us improve the patient care experience’ (Patient Centeredness and Efficiency). One other item had high missing/DNA/DK in hospitals (30% in hospitals; 23% in medical offices): ‘We focus on eliminating unnecessary tests and procedures for patients’. The high levels of missingness for these items suggested that larger proportions of respondents were unable to respond about these issues.
Initial internal consistency reliability analysis
For the four a priori composite measures, Cronbach’s alpha was greater than 0.70. In addition, dropping any item within a composite measure would not have increased that measure’s reliability.
Technical Expert Panel (TEP) review and input
We presented the initial analysis results to the TEP to obtain their input on whether to retain or drop items. To shorten the survey, we also asked them to identify other items to drop based on content and relevance. After TEP feedback, we dropped the same 11 of the original 36 pilot items from both the Hospital and Medical Office Item Sets, leaving 25 items. Some items were dropped because they were flagged during item analysis (eg, high per cent positive, high missing/DNA/DK); in other cases, the TEP advised that items be dropped or retained based on conceptual importance. From the Medical Office Item Set, we also dropped the four Overall Ratings of Healthcare Quality because they are already included in the SOPS Medical Office Survey. Therefore, there were 25 items in the final Hospital Value and Efficiency Item Set and 21 items in the final Medical Office Value and Efficiency Item Set (see online supplemental file B for the specific items dropped and reasons for dropping). We then ran the remaining psychometric analyses on the retained items.
Intraclass correlations (ICC(1)) and design effects
In online supplemental table 2, the ICCs for the final hospital composite measure items ranged from 0.03 to 0.06 (average of 0.04), indicating that between 3% and 6% of the variance in individual responses to the items could be attributed to hospital membership. Design effects for the hospital data ranged from 3.37 to 4.96 (average of 4.19). The ICCs for the final medical office composite measure items ranged from 0.13 to 0.25 (average of 0.17), suggesting that between 13% and 25% of the variance in individual responses to the items could be accounted for by medical office membership. Design effects for the medical office data ranged from 2.25 to 3.42 (average of 2.83). In both datasets, ICCs above 0.05 and/or design effects equal to or greater than 2.00 confirmed that site membership affected the way individuals responded to the survey. Therefore, we needed to take into account the multilevel nature of the data when examining the factor structure of the a priori composite measures.
Multilevel confirmatory factor analysis (MCFA)
Online supplemental table 2 displays standardised factor loadings for each final survey item on its respective composite measure in the hospital and medical office data. All between-site and within-site factor loadings for both the hospital and medical office data were statistically significant (p<0.05) with magnitudes above 0.40, indicating that all the items adequately loaded on their respective composite measures. Between-hospital factor loadings ranged from 0.69 to 1.00 and within-hospital factor loadings ranged from 0.64 to 0.89. Between-medical office factor loadings ranged from 0.66 to 1.00 and within-medical office factor loadings ranged from 0.65 to 0.83. All model fit indices, except for the χ²/df for the hospital data and the between-site SRMR for the medical office data, met criteria for acceptable model fit (table 3).
Final internal consistency reliability analysis
Table 4 shows that Cronbach’s alpha for all final composite measures exceeded the 0.70 criterion. None of the final survey items, if deleted, would increase the reliability of the composite measures.
Per cent positive scores and correlations among the composite measures
Online supplemental table 3 shows average per cent positive scores and SD at the hospital and medical office levels for the composite measures, rating items, Experience with Activities items and the average score for Experience with Activities. Hospitals scored highest on Management Support for Improving Efficiency and Reducing Waste (78% positive) and lowest on Empowerment to Improve Efficiency (64% positive), while medical offices scored highest on Efficiency and Waste Reduction (72% positive) and lowest on Patient Centeredness and Efficiency (54% positive). For the rating items, both hospitals and medical offices scored highest on the Overall rating for being Patient Centered—Is responsive to individual patient preferences, needs, and values (67% positive for hospitals; 64% for medical offices) and lowest on being Efficient—Ensures cost-effective care (avoids waste, overuse, and misuse of services) (52% positive for hospitals; 46% for medical offices).
In both hospitals and medical offices, the activity to improve efficiency that most respondents had done in the past 12 months was making a suggestion to management about improving an inefficient work process (64% positive for hospitals and 62% for medical offices). The activities that the fewest respondents had done were: shadowing/following patients to identify ways to improve their care experience (18% of respondents for hospitals and 15% for medical offices) and monitoring data to figure out how well an activity to improve efficiency was working (29% of respondents for hospitals and 17% for medical offices). The average score for Experience with Activities was 44% positive for hospitals and 33% positive for medical offices.
Table 5 shows the Spearman correlations among the composite measures, rating items and the average score for Experience with Activities. All four composite measures were significantly intercorrelated (p<0.05) in each dataset. The strongest correlation in the hospital data was between Management Support for Improving Efficiency and Reducing Waste and Patient Centeredness and Efficiency (rs = 0.85). The strongest correlation in the medical office data was between Owner/Managing Partner/Leadership Support for Improving Efficiency and Reducing Waste and Empowerment to Improve Efficiency (rs = 0.52). In addition, all four composite measures were significantly correlated (p<0.05) with the four Overall Ratings of Healthcare Quality in both the hospital (range=0.48 to 0.76) and medical office (range=0.22 to 0.60) settings. Furthermore, seven of the eight correlations between the average score for Experience with Activities and the composite measures and rating items were statistically significant (p<0.05) in the medical office data (range=0.20 to 0.39), and all were statistically significant in the hospital data (range=0.35 to 0.63).
Discussion
Both the hospital and medical office item sets demonstrated good psychometric properties. Internal consistency reliability estimates were acceptable for the four composite measures in both item sets. Estimates for ICC(1)s and design effects indicated the need to account for the multilevel nature of the data when examining the factor structure in the hospital and medical office data. Results of the MCFA provided support for the reliability and construct validity of the four composite measures at both the site (ie, hospital and medical office) and individual levels. In addition, all composite measures were significantly, yet moderately, intercorrelated and also correlated with the Overall Ratings of Healthcare Quality in both item sets, indicating adequate conceptual convergence among these measures.
A strength of our study was that we conducted an extensive pilot test with a large number of hospitals and medical offices that were selected using purposive sampling to vary by region, size, hospital teaching status and medical office specialty. In addition, the characteristics of our study’s respondents in terms of the percentages of physicians, nurses and other clinical and non-clinical staff were typical of what hospitals obtain when they administer culture surveys such as the AHRQ SOPS Surveys.27 28 Therefore, our findings are based on a fairly representative set of hospital and medical office sites and respondents.
The final Hospital and Medical Office Value and Efficiency Item Sets both include 13 survey items grouped into 4 composite measures of organisational culture pertaining to value and efficiency: Empowerment to Improve Efficiency (3 items), Efficiency and Waste Reduction (3 items), Patient Centeredness and Efficiency (3 items) and Management Support for Improving Efficiency and Reducing Waste (4 items). In addition, there are eight questions about Experience With Activities to Improve Efficiency. Although the hospital and medical office versions are parallel, they vary in wording to be appropriate in each setting. The hospital item set also includes four Overall Ratings of Healthcare Quality that ask respondents about the extent to which their unit/work area is Patient Centered, Effective, Timely, and Efficient (these overall ratings are already included in the SOPS Medical Office Survey).
Using the value and efficiency item sets for improvement
Examining the relationships among the composite measures, Management Support for Improving Efficiency and Reducing Waste was most strongly related to the other composite measures. This finding is consistent with the literature that identifies the importance of leadership in influencing organisational culture.29 In the IOM report Best Care at Lower Cost, a leadership-instilled culture of learning was identified as an important characteristic of a continuously learning healthcare system.2 The report also emphasised the importance of incentives aligned for value to encourage continuous improvement, identify and reduce waste, and reward high-value care.
Although leadership is essential, managers need to be aware that their perspectives often differ considerably from those of other staff. In a separate publication based on the same data collection as our study, the Medical Office Value and Efficiency Items were used to examine differences in culture perceptions by staff position.30 Results showed that clinical staff had more positive value and efficiency culture perceptions than non-clinical staff, but among non-clinical staff, managers were more positive than non-managers. The study demonstrated the utility of the items in understanding the different perspectives of clinical, non-clinical and managerial staff when trying to build consensus and foster shared perceptions within an organisation.
On questions about Experience With Activities to Improve Efficiency, in both hospitals and medical offices, the majority of respondents had made a suggestion to management about improving an inefficient work process. However, most staff had not conducted other important activities such as shadowing/following patients to identify ways to improve their care experience or monitoring data to figure out how well an activity to improve efficiency was working. Higher scores on all four composite measures were significantly related to higher average Experience With Activities scores, indicating that cultures focusing more on value and efficiency have greater numbers of clinicians and staff who apply these concepts in their everyday work activities.
It is clear from these results that much more needs to be done within healthcare organisations to ensure that activities focused on value and efficiency are supported and conducted. But what can healthcare organisations do to reduce waste and create a culture of value and efficiency while maintaining high-quality care? Leading healthcare systems in the USA that have focused on culture change to improve high-value healthcare, such as Virginia Mason Medical Center31 and ThedaCare,32 can serve as exemplars. At Virginia Mason, employees have attended an ‘Introduction to Lean’ course and participated in rapid process improvement activities in which teams analyse processes and propose, test and implement improvements.33 At ThedaCare, leaders have acknowledged that a ‘lean’ culture requires new behaviours: designing processes that reduce wasted time, redesigning work processes to better enable staff to meet the needs of patients33 and recognising the need to build a culture of continuous improvement.32
In addition to the efforts of leading-edge organisations, groups of healthcare organisations are convening to share data and best practices. The High Value Healthcare Collaborative34 includes more than a dozen provider-based learning health systems committed to improving healthcare value by sharing data; working together to assess high-cost, high-variation health conditions and treatments and identifying and disseminating promising models of care. Another organisation, The Health Care Transformation Task Force, brings together patients, payers, providers and purchasers to align public and private-sector efforts to foster value by sharing data, developing best practices and toolkits for implementing value-based payment models and establishing leadership forums to address transformational challenges, including strategy and culture.35
Administering the Value and Efficiency Item Sets
While healthcare organisations may choose to administer the item sets as stand-alone surveys or only administer a subset of the items, we recommend that they be administered as supplements to the AHRQ SOPS surveys without modification. First, understanding how organisational culture supports value and efficiency is best done in the larger cultural context that supports patient safety, which is assessed in the SOPS surveys. Second, to ensure comparability of scores across healthcare organisations, the items should be administered in a standardised way. Third, using the item sets as supplements to the SOPS surveys limits the number of separate employee surveys that are conducted. Finally, we expect that healthcare organisations and researchers will find it useful to examine which aspects of culture support both patient safety and value and efficiency.
Limitations
Even though the pilot hospitals and medical offices were recruited to vary on several key characteristics, they were not randomly selected and thus are not truly representative of all US hospitals and medical offices. In addition, our multilevel analyses for hospitals and medical offices examined the individual level and site level, but we were unable to include the unit level within hospitals because accurate unit-level information was not captured. Although our study demonstrated that both item sets had good psychometric properties and construct validity, our only outcome measures were respondent-reported overall ratings of the patient-centredness, effectiveness, timeliness and efficiency of the hospitals and medical offices. The value of the survey measures would have been strengthened if we could have demonstrated correlations with external, non-survey measures of efficiency or value, but such additional measures were not available for our study. Future research should attempt to link the new survey measures with other indicators of value and efficiency, as well as with patient experience surveys such as CAHPS (Consumer Assessment of Healthcare Providers and Systems) or the SOPS patient safety culture surveys, to further our understanding of how these measures are related.
Conclusion
Given limited existing measures, our study fills an important gap by providing psychometrically sound survey items that measure distinct and important aspects of value and efficiency culture. When used as supplements to the existing SOPS Hospital and Medical Office surveys, the item sets can extend those surveys by assessing additional dimensions of organisational culture that affect the safety and quality of care delivery. Healthcare organisations can use these item sets to assess how well their organisational culture supports value and efficiency and identify areas for improvement based on input from clinicians and staff throughout the organisation.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
Ethics statements
Patient consent for publication
Acknowledgments
The authors are grateful for the input from the 17 experts and researchers on value and efficiency in hospitals and medical offices who served on the Technical Expert Panel. In addition, we thank the hospitals and healthcare systems that provided pilot sites and facilitated survey administration in those sites. We also wish to thank our AHRQ project officer William Encinosa for his support throughout the project as well as Stephen Hines, Kevin Kenward and Marie Cleary-Fishman. A portion of this work was previously presented at an AHRQ conference.
Footnotes
Contributors JS led the overall project, survey development and manuscript preparation. KZ conducted data analysis and manuscript writing. NY, TF, LG, MF and SS were involved in survey development and data collection. NY and SAS conducted data analysis and interpretation of results. All authors reviewed and approved the manuscript.
Funding This study was supported by the Agency for Healthcare Research and Quality (Contract No. HHSA290201000025I).
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.