Quality lines

EDITOR’S CHOICE

Discussion about the differences between research and improvement has a long history, and understanding the distinction may have practical consequences. To protect patients, research, an optional activity, is regulated through institutional processes. Improvement programmes within routine practice, however, are not usually subjected to regulation. No improvement activity should put patients at risk, but should obligatory activities be scrutinised as research projects? These important issues are discussed by Lynn (p 67) and Doyal (p 11). Please enter this debate and send your views through the rapid response facility on the QSHC website, http://www.qshc.com.

Research findings feed into improvement work. Being able to link what is done to what happens to patients is crucial for choosing data to guide improvement. Taking β blockers and aspirin after a heart attack is an effective intervention that increases survival. But, as Bradley and colleagues describe (p 26), even in such an apparently simple example, collecting data and feeding them back to clinicians in a form they can use with confidence is not straightforward. Local champions may be needed to enable the data to stimulate reflection. However, linking back from what happens to what was done may not always be fruitful. Readmission rates, which vary between hospitals, have been proposed as markers of the quality of care, but Luthi and colleagues (p 46) suggest that, for patients with heart failure, the readmission rate is an unsatisfactory indicator of previous care. And in another study Scott and colleagues (p 32) could validate only 5 of 12 “diagnosis-outcome” indicators as markers of systematic variation in care.

Many data are generated in the pursuit of improving care, but data that are unused, not understood, or not believed have little impact on practice. Collecting such data may not harm patients directly, which is why improvement is not scrutinised like research. But it surely represents a lost opportunity if it impedes the accrual of information that could make a difference. We need to find out which data are relevant, credible, and usable.

HEART FAILURE AND READMISSION

Outcome measures such as readmission have often been proposed as sentinel quality indicators because they are easily computed and identifiable from administrative data. In an evaluation study of patients hospitalised for heart failure in three Swiss academic medical centres, we show that readmission did not predict evidence-based process quality indicators in either bivariate or multivariate analyses. Furthermore, when process quality indicators were used as the gold standard, readmission was not a valid indicator of the quality of care (low sensitivity and specificity). These findings provide new evidence of the limitations of using this outcome as an indicator of the quality of care.
 See p 46

LESSONS LEARNED IN QUALITY IMPROVEMENT

Performance monitoring and data feedback can be effective tools for bringing about changes in physician practice. However, these efforts can be expensive to implement and many fail. We identify best practices in implementing data feedback in the hospital setting by revealing the common pitfalls of such initiatives as well as effective strategies for overcoming them. Specific experiences, based on interviews with 45 clinical and administrative staff members in eight US hospitals, demonstrate the diversity of efforts. We argue that the effectiveness of data feedback depends not only on the quality and timeliness of the data, but also on the organisational context and the physician leadership available to nurture and promote such efforts.
 See p 26

PSYCHIATRIC INPATIENT CARE

The English Department of Health has proposed that prescribing indicators should form part of the basket of measures used to manage the performance of mental health services. This study examines the justification for this proposal using seven such indicators derived from prescribing data for more than 4000 inpatients in 49 British mental health services. Six of the indicators were intercorrelated and did appear to relate to a common attribute of the services. The six indicators, singly and combined, probably reflect a complex set of interacting factors that affect prescribing decisions on the wards concerned. These are likely to include case mix, the quality of the ward environment, and aspects of service organisation, as well as the quality of clinical practice. Indicators will need to be interpreted with considerable care if they are to be used for performance management.
 See p 40

In this issue there are several changes. Firstly, “Action points”, previously expertly compiled by Tim Albert, has moved from the back to the front of the journal and is now called Quality lines. It consists of authors’ own summaries of their papers, and the page also includes a short editor’s choice. Many thanks to Tim for writing Action points. Secondly, quality improvement reports will be a regular feature (see page 52 of this issue). Thirdly, we want to increase the opportunity for readers to interact with the journal, so there is now a “Quality Ideas” button on the homepage of the website (http://www.qshc.com/misc/ideas.shtml) which allows you to post any tips for improving the quality or safety of care that you have found useful and that might help others.
