Patient-reported outcome measures (PROMs) are questionnaires that assess patients’ health, health-related quality of life and other health-related constructs.1 They have traditionally been used to describe the burden of disease and to establish the comparative effectiveness of different treatments.2 There is increasing interest in the use of PROMs to improve health services. Many policymakers and researchers believe that PROMs provide an essential perspective on the quality of health services,2–4 and it has been suggested that they have the potential to transform how healthcare is organised and delivered.5 PROMs have been used to compare and reward the performance of healthcare providers in England,2 the USA,6,7 Australia8–10 and Sweden,7 and their potential to improve quality has also been recognised in Canada4 and the Netherlands.11
The mechanisms through which PROMs feedback to healthcare professionals might improve the quality of healthcare depend on the type of feedback provided.
PROMs may be used to provide professionals with information about their performance against their peers.1,2 It is posited that PROMs should act to improve the quality of healthcare in the same way as any other benchmarking tool.2,3 Peer benchmarking is thought to stimulate an intrinsic desire in healthcare professionals to succeed relative to their peers.12 In addition, it is hypothesised that professionals and organisations are motivated to avoid any negative consequences of peer benchmarking. These consequences depend on the extent to which the benchmarking exercise is used to support broader quality improvement strategies such as clinical governance, payment by performance, clinical commissioning and patient choice.2,13 For example, PROMs are used alongside other indicators to measure the performance of English National Health Service (NHS) providers and drive up quality throughout the NHS “by encouraging a change in culture and behaviour focused on health outcomes not process”.14 PROMs are also used in England to guide the award of ‘bonus’ payments to NHS Trusts,15 to inform the decisions of commissioning bodies about which NHS Trusts to contract with16 and to facilitate patients when choosing a provider for certain elective surgical procedures.17 Finally, it is hypothesised that although the benchmarking of outcomes does not provide a direct insight into the causes of inter-professional performance variation, it can stimulate audit and research activities that might lead to the discovery of these causes. For example, professionals who are discovered to have poor performance might learn from the practices of those with the best performance.18
Patient-level PROMs feedback can also be provided to professionals. This is hypothesised to facilitate personalised care management by highlighting the concerns and needs of individual patients in a structured format.19 The information can be used to highlight previously unrecognised health problems,20 assess the effectiveness of different treatment plans,21 monitor disease progression,22 stimulate better communication23 and promote shared decision making.24,25 Specific quality improvements that might arise from a consideration of PROMs feedback include ordering additional tests, referring the patient to a new specialist, amending prescribed medicines or treatments, issuing personalised advice and education on symptom management, and altering the goals of treatment plans to better reflect patient concerns.26,27
The evidence supporting the effectiveness of PROMs in contributing to improvements in the quality of healthcare is heterogeneous, and it has been difficult to draw definitive conclusions about their impact on patient care.28 While there is some evidence that PROMs are effective in enhancing patient–clinician communication and helping to recognise new health issues, there is little evidence that PROMs feedback to healthcare professionals changes care management or improves patient outcomes.28,29 This evidence should be considered alongside findings from the broader literature. First, the effects of audit and feedback interventions are generally small to moderate, and we understand relatively little about the complex process dynamics associated with successful interventions.30 Second, the use of theory in studies of audit and feedback is rare, which signals a need for more theoretically informed interventions.31
Qualitative research with end users plays an important role in helping us understand why interventions are ineffective in practice and in the development of theoretical models to support successful implementation. Examining first-hand experiences may provide unique insights into the challenges associated with implementing and using PROMs in practice.32,33 Synthesising this evidence may help explain the modest impact of PROMs on professionals’ behaviour to date. Two previous reviews have reported the evidence about professionals’ views on the use of outcome measures in general, not specifically focusing on PROMs.34,35 The first was a non-systematic review that provided an overview of the barriers to the routine use of outcome measures.34 The second was a systematic review that looked at the barriers and facilitators to the use of outcome measures in routine practice.35 This review was limited to the views of allied health professionals and excluded professions such as medicine and nursing. Given the unique methods and perspectives introduced by PROMs, and their broad use across different professional groups, there is a clear need for a systematic review of the qualitative literature that focuses exclusively on PROMs and includes all relevant healthcare professionals.
This review aimed to identify qualitative studies that have investigated the experiences of healthcare professionals with the use of PROMs as a means to improve the quality of healthcare and to synthesise findings about the barriers and facilitators to their use. The review also explores how the characteristics of different studies influenced the results observed.
Studies were included if they met the following criteria: the language of publication was English; the participants were healthcare professionals; the study examined professionals’ views of PROMs after they had received PROMs feedback about individual patients or groups of patients; and the study used a qualitative design.
A search without time restriction was performed in PubMed, PsycINFO and CINAHL in August 2013 (see online supplementary appendix 1). Reference lists of included papers were screened for additional studies.
A search strategy was developed comprising three blocks of terms relating to PROMs, qualitative research and professionals’ opinions. Brettle et al36 previously developed a comprehensive filter for PROMs, which was used as the first block for this search. The second block was based on a published search filter developed to capture qualitative evidence.37 The third block was developed by the authors to meet the aims of this specific review. It combined terms relating to ‘professionals’ and ‘opinions’, and used a proximity operator that identified any combination of these terms when they appeared within three words of each other.
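The proximity logic used in the third block can be illustrated with a short sketch. This is a hedged illustration only: the regular expression below approximates a ‘within three words of each other’ proximity operator of the kind offered by bibliographic databases, and the terms `professionals` and `opinions` are hypothetical stand-ins, not the review’s actual search strings.

```python
import re

# Illustrative approximation of a 'within N words of each other' proximity
# operator. The search terms used here are hypothetical examples.
def near(term_a: str, term_b: str, max_between: int = 3) -> re.Pattern:
    # Allow up to max_between intervening words, matching in either order.
    gap = rf"(?:\W+\w+){{0,{max_between}}}\W+"
    return re.compile(
        rf"\b{term_a}\b{gap}\b{term_b}\b|\b{term_b}\b{gap}\b{term_a}\b",
        re.IGNORECASE,
    )

pattern = near("professionals", "opinions")
print(bool(pattern.search("the opinions of the professionals involved")))  # True
print(bool(pattern.search("opinions were gathered, then validated later by professionals")))  # False
```

In a database interface this logic is expressed declaratively (as a proximity operator between the two term blocks); the regex simply makes the ‘within three words, either order’ semantics concrete.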
MBB initially screened the titles and abstracts of articles retrieved by the search strategy. The full text of potentially relevant articles was evaluated if there was not enough information to make an informed decision about relevance to the systematic review from the abstract. Where there was continued uncertainty about whether such papers met the inclusion criteria, another reviewer (JPB) was consulted for a second opinion and discrepancies were discussed to form a consensus.
Data collection process
All articles that met the inclusion criteria underwent data extraction for information about study aims, location and setting, study design, participants, recruitment, PROMs used, level of application, feedback strategy and study findings. A quality appraisal of included studies using an established toolkit was performed by MBB and reviewed by JPB.38 The quality appraisal assessed the following criteria: appropriate design, appropriate recruitment strategy, appropriate data collection method, reflexivity, ethical research, appropriate analytic method, appropriate discussion of findings and overall value. A sensitivity analysis was performed using matrices to compare the patterns of themes identified in studies of different quality.
Synthesis of results
Thematic synthesis was used to analyse the papers included in the review.39 This approach compares themes across studies, examines study characteristics to help explain differences in findings and develops interpretations beyond the original studies to generate analytical themes.39 The synthesis was performed by entering the entire results section from each study into QSR International's NVivo 10 software.40 The synthesis involved three stages: free line-by-line coding of findings from primary studies, categorising free codes to develop descriptive codes and developing analytical themes that explored the relevance of the descriptive codes in the context of the research question.39 Study characteristics and findings were cross-referenced on a matrix to explore whether thematic patterns were associated with certain studies. Meetings and correspondence between the coauthors throughout the analysis helped to refine the themes and to challenge interpretations of the data.
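The three coding stages can be pictured as a simple hierarchy of labels. The sketch below is purely illustrative: all free codes, descriptive codes and theme names are hypothetical examples chosen to mirror the structure of the method, not data extracted from the included studies or from the authors’ NVivo project.

```python
# Stage 1: free line-by-line codes attached to excerpts from primary studies
# (excerpt -> free code); all labels here are hypothetical examples.
free_codes = {
    "staff reported the questionnaires took too long": "time burden",
    "scores were hard to act on in clinic": "unclear interpretation",
    "patients disclosed previously unrecognised symptoms": "screening value",
}

# Stage 2: free codes grouped into descriptive codes
descriptive_codes = {
    "workload": ["time burden"],
    "interpretability": ["unclear interpretation"],
    "identifying health issues": ["screening value"],
}

# Stage 3: descriptive codes organised under analytical themes
analytical_themes = {
    "practical considerations": ["workload"],
    "making sense of the data": ["interpretability"],
    "impact on patient care": ["identifying health issues"],
}

def theme_for(free_code_label: str) -> str:
    """Trace a free code up through its descriptive code to an analytical theme."""
    for desc, frees in descriptive_codes.items():
        if free_code_label in frees:
            for theme, descs in analytical_themes.items():
                if desc in descs:
                    return theme
    raise KeyError(free_code_label)

print(theme_for("time burden"))  # practical considerations
```

The matrix cross-referencing described above then amounts to tabulating which studies contribute codes to which themes.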
In total, 8344 potentially relevant publications were identified by our search strategy and 7930 were excluded on the basis of their titles. An abstract review of the remaining 414 articles was performed and 87 were chosen for full-text review. Seventy-one articles were excluded at the full-text stage, leaving 16 relevant articles (figure 1 and table 1). These were an entirely different set of studies to those included in the only previous systematic review of professional opinions about the routine use of outcome measures.35
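The selection counts reported above reconcile arithmetically; a minimal sketch of the flow:

```python
# Reconcile the study selection counts reported in the review.
identified = 8344            # records retrieved by the search strategy
excluded_on_title = 7930     # removed after title screening
abstracts_reviewed = identified - excluded_on_title

full_text_reviewed = 87      # selected after abstract screening
excluded_at_full_text = 71
included = full_text_reviewed - excluded_at_full_text

print(abstracts_reviewed, included)  # 414 16
```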
Over half of the included studies were carried out in the UK (n=9). The remainder took place in Sweden (n=3), Australia (n=2), the USA (n=1) and Canada (n=1). The study settings included primary care (n=5), hospital care (n=4), hospice care (n=2) and mixed settings (n=4). The setting of one study was not clear.41
The healthcare professionals studied included physicians (n=4), nurses (n=2) and therapists (n=1). Eight studies included a mixture of healthcare professionals and one study did not explicitly state the healthcare professionals involved.41 The treatment focus of the studies was mental health (n=7), palliative care (n=5), oncology (n=1), acute care (n=1), respiratory medicine (n=1) and rheumatoid arthritis (n=1).
Qualitative data were collected through interviews in nine studies, focus groups in five studies and a mixture of interviews and focus groups in two studies. Most studies provided PROMs feedback to healthcare professionals at the individual patient level (n=13). Two studies provided feedback about the average scores of groups of patients and in one study this aspect of the design was unclear.42 All studies provided insights into how PROMs data are used by professionals in practice and a subset of 11 studies also explored the feasibility of data collection.
The quality appraisal exercise found that the included studies were generally good at justifying the research design, providing details on the participants included in the research, explaining the data collection process, clarifying ethical issues, outlining the data analysis methods and the findings, and identifying the value of the research. However, some shortcomings that emerged from the critical appraisal included unclear rationale for the sampling methods used; a failure to explicitly justify the chosen data collection methods; inadequate incorporation of reflexivity into the research process; insufficient detail about the rigour of analysis; and inadequate methods to increase the credibility of findings (see online supplementary appendix 2). Three studies were judged to be of a higher standard than the rest on these latter criteria.43–45
Synthesis of results
The themes and subthemes that emerged from the thematic synthesis are described in table 2, and excerpts from the original studies are provided for illustrative purposes. A detailed description of the themes identified in each study is displayed in the online supplementary appendix 3. As each paper had slightly different aims, their overall contribution to each theme depended on the focus of the original studies.
Theme 1: practical considerations
This theme captures issues around the data collection process and the effective use of the information. Practical issues were identified in 14 studies.8,9,41,42,44–53 In nine studies, the workload associated with collecting and analysing data was identified as a significant barrier to the routine use of PROMs.8,9,41,42,44,48–50,53 However, some of the studies identified that workloads could be reduced if PROMs feedback was integrated naturally into the consultation process.45,49,51 The difficulty or ease of PROMs administration also emerged as a determinant of successful implementation. Barriers emerged when the questionnaire was not user-friendly,8,9,41,42,44,45,47,48,50,53 but data collection was facilitated when patients had few difficulties completing the measure.41,42,47 Some studies identified a lack of collaboration between colleagues as leading to the burden of data collection being placed on a small number of staff members.9,42,45,48 Lack of clear guidelines on the data collection process (patient eligibility, timing, frequency and location of administration) and on how to correctly analyse and interpret the data created further barriers.8,42,44,47,49,50,52 However, some studies identified that flexibility in the data collection process was necessary due to variability in the acuity of patients.41,51 Professionals were more willing to engage in the process when management showed appreciation for the additional work involved and when management themselves became deeply involved in the process.8,9,42
Study participants also stated that appropriate training was necessary to engage effectively in the process. They specifically proposed that a lack of training on how to recruit patients, deal with difficult scenarios and use the information effectively inevitably created barriers.8,9,42,44,48,49,51 Some studies found that having time to become familiar with the measures prior to implementation was a facilitating factor.8,9,41,50,51 Professionals recognised that support during the initiation stage of the data collection was helpful. The effective use of PROMs data was curtailed when statistical support was not available, as professionals lacked the expertise to appropriately analyse and interpret the data.9,42,44,45,53 Professionals recognised that they also required support from the wider service to adequately deal with the issues that the measurement highlighted, such as referral to specialist professionals or access to suitable treatments.44,45 Lastly, the use of technology was recognised as a barrier when it slowed down the process8,9,51 and a facilitator when it made the collection of the data and dissemination of the findings more efficient.8,46,49
Theme 2: valuing the data
This theme captures professionals’ attitudes to the use of PROMs. It was identified in 11 studies.8,9,43–45,48,49,51–54 Barriers to appreciating the value of PROMs emerged when the objectives for collection were not transparent. In such circumstances, professionals questioned the motives behind the data collection and expressed fear about how the results would impact on their practice and patient care.8,9,43,48,51,53 Furthermore, barriers were identified when professionals were not open to receiving feedback or changing their clinical practice.8,9,43–45,49,51–54
Theme 3: making sense of the data
This theme captures the methodological considerations that are associated with PROMs. Methodological factors were identified in 13 studies.8,9,41–46,48–50,52,53 The interpretability of PROMs data influenced professionals’ opinions about their scientific value in a quality improvement context.8 Professionals appreciated the graphic presentation of results,49 but identified the need for more sophisticated feedback that clearly depicts what constitutes a clinically important change.8 Others requested aggregated data about the effectiveness of different treatments to complement data about individual patients.46 Concerns about the validity of PROMs emerged in many studies as professionals questioned whether the data produced a genuine reflection of care.8,9,41,43–45,48,50,52,53 Professionals identified situations where the validity of measurement was compromised, including when patients did not complete the measures accurately, provided socially desirable responses, hid symptoms or failed to follow instructions, or when staff administered the measure incorrectly or in a non-standardised manner. Some professionals also criticised the sensitivity of the measures to accurately detect change in specific patient populations.41,42,53
Theme 4: impact on patient care
This theme was identified in all studies and captures issues around the impact of PROMs on care processes and outcomes. There were mixed views regarding the causal link between the use of PROMs and improvements in patient care. Professionals identified that the use of PROMs in practice had the potential to improve the processes of care by enhancing communication, increasing patient education, promoting joint decision making, screening for health issues, monitoring changes in disease severity and response to treatment, and stimulating better care planning. Professionals appreciated PROMs as a tool to complement their own clinical judgement and to stimulate professional development. PROMs were also recognised as a research and audit tool.41,42,48 However, some professionals found that the measures were not of clinical value as the results provided them with no new information.8,9,41,42,44,46,50,53,54 Professionals highlighted some indirect effects of using PROMs on patient care. Negative effects included the intrusive nature of collection on the patient’s privacy and the doctor–patient interaction, the capacity to narrow the focus of a consultation and the opportunity cost for what were perceived to be more important aspects of care. Furthermore, professionals found that certain questions distressed patients and thought the process had the potential to damage the patient–clinician relationship.8,9,41–45,48,50,53 Positive indirect effects of collecting PROMs were also identified, which included the ability to build patient confidence in the competence of the professional, to manage patient expectations and to assist in handing responsibility for care back to the patient.42,43,45,46,48,50,51
Explaining the findings
The relationship between themes and study characteristics was examined to help explain the findings. The characteristics examined included the professional group under study, the study setting, the healthcare issue under examination and the function of the PROM. No clear pattern was attributable to the inclusion of different professionals, settings or healthcare issues. However, the function of the PROMs used in individual studies may have influenced the study findings. Practical facilitators were most likely to be observed in studies where PROMs functioned as a care management tool; however, these studies also tended to use computer administration and feedback.8,9,45,46,49,51 A similar trend was observed with the facilitators identified in the methodological theme.8,9,46,49 In addition, a lack of clarity regarding the objectives for measurement emerged as a barrier, and involvement of management emerged as a facilitator, when PROMs were used as performance monitoring tools.8,9 Only one study did not identify any positive impacts of using PROMs; this study employed PROMs as a screening and care management tool for mental health issues.44 The studies that did not identify any negative aspects of collecting PROMs employed PROMs as care management tools.47,49,51,52
The barriers and facilitators identified in this review were categorised into practical considerations, attitudes towards the value of the data, methodological concerns and the impact of feedback on patient care. Practical considerations included workload implications, the ease of data collection, the level of collaboration among colleagues, the provision of clear guidelines for implementation, the level of managerial involvement, the availability of training and support, and the use of technology. Attitudes towards the use of PROMs were associated with the transparency of objectives, and the openness to feedback and change. Methodological concerns identified included the interpretability of the information and the validity of the measures. The impact of the feedback depended on the usefulness of the information to guide decisions on patient care and the indirect effects of routinely collecting PROMs data.
There is a subtle but important distinction between the need for support to correctly analyse and interpret PROMs data, which we have classified as a practical issue, and the concerns raised by professionals about the validity and interpretability of PROMs, which we have classified as a methodological issue. In the ‘practical’ theme, we are addressing the support (statistical help and training) that professionals feel they need in order to familiarise themselves with a relatively alien concept. This is different from fundamental scientific concerns about PROMs that may endure even if statistical support and training are provided.
The themes presented in this review were consistent across different studies. There was some evidence that PROMs were viewed more positively when they functioned as care management tools for individual patients and more negatively when producing performance data about the care delivered by professionals to groups of patients. This may indicate that PROMs have more value to professionals when they produce data that can be linked to individual patient care, but this interpretation should be considered with caution due to the small number of studies where PROMs were used as performance monitoring tools.
Strengths and limitations
This is the first review to synthesise the qualitative evidence on the views of professionals who have first-hand experience of the use of PROMs as a means to improve the quality of healthcare. This review has some limitations. First, the review focused only on English-language articles, and it is possible that different experiences with the use of PROMs may be apparent in countries where English is not the first language. Second, only one reviewer performed the initial screening and study selection, and although reference searching was performed to reduce the likelihood of missing appropriate studies, there is still a small chance that some relevant literature was missed. Third, the results are based on the credibility of findings in the original studies, and there is a lack of detail in all but three studies about the use of methods to enhance credibility. However, the themes identified are quite logical and are similar to those presented in previous reviews of the use of outcome measures generally.34,35 Fourth, the study presents only the perceptions of healthcare professionals and does not attempt to represent the views of patients or healthcare managers about the value of PROMs.
Relevance to previous literature
The themes identified in this systematic review are well-known barriers and facilitators to the success of audit and feedback interventions in other contexts. Our systematic review confirms the importance of these issues while revealing new insights specific to PROMs. For example, practical barriers such as inadequate organisational and technical support have been comprehensively documented in the quality improvement literature.55–57 This review deepens our understanding of these issues in the context of PROMs by highlighting the considerable barriers associated with data collection and the need for specific training in the use and interpretation of psychometric instruments. Similarly, there is evidence from the broader literature that interventions are more likely to fail when professionals display negative attitudes and are suspicious about the purpose of audit and feedback.58–60 Our review highlights the specific issues associated with negative attitudes to PROMs, including methodological concerns about the validity of patient-reported data and worries about the potential for routine PROMs administration to disrupt patient care. It is of note that these concerns have also been voiced by patients in separate qualitative studies.61,62 Finally, there is evidence from other contexts that feedback has the greatest impact when it is focused on specific task-based solutions and delivered in a goal-setting context.30,63 Our review underlines how difficult it is for PROMs to satisfy these criteria given the problems experienced by professionals in attempting to interpret PROMs feedback and turn the information into concrete quality improvement solutions.
Implications for clinicians and policymakers, and future research
It is clear that many professionals have yet to be convinced of the value of PROMs, but they could be encouraged to engage with their use given the right practical and methodological support. Greater investment in data collection technology could relieve much of the human workload and make feedback more timely.64 Greater clarity over the objectives of data collection and investment in methodological training are additional solutions. It is interesting that PROMs feedback has shown greatest promise in the area of mental health, a field where the use of these measures has long been embedded in routine practice and where professional attitudes may be more positive as a consequence.21,24,28,65 However, it is important to understand the cause of any resistance, as professionals may have good reasons for not implementing or using PROMs.66 For example, PROMs have well-known problems with interpretability and professionals may therefore have legitimate grounds for resisting their use.33,67 The appropriateness of using PROMs in a quality improvement context is also a source of legitimate debate. Most commonly used PROMs were developed to evaluate the effectiveness of different treatments and therefore may not provide sufficient or appropriate information to guide quality improvement activities. This problem is indicative of a relatively poor theoretical basis for the use of PROMs in a quality improvement context.27
The barriers identified in this review may represent a failing on the part of those who advocate the use of PROMs to sufficiently engage professionals in the planning stage and to acknowledge the conflict between managerial and professional objectives.68,69 A deeper understanding of the motivations of different stakeholders is essential to disentangle how PROMs can be used to improve quality in reality. Further qualitative studies with professionals and case studies of PROMs initiatives are essential.7 This would help researchers and policymakers gain an understanding of how this information impacts on clinical decision making. Lastly, evidence is required to identify the specific healthcare issues and patient populations that have large variability in outcomes as these are where PROMs data are likely to have the greatest impact. Otherwise, as Wolpert points out, inappropriately implementing PROMs in practice may only lead to an increased bureaucratic burden with little positive impact on care.70