Finding out how others' schemes to implement change succeed, or why they fail, can be extremely helpful. It can save time and effort and may accelerate improvements in service delivery. One of the stated aims of this journal is to publish such quality improvement reports alongside papers that report the results of relevant research. The editorial team are aware, through discussion with colleagues, papers presented at meetings, and reading of local reports, that many people are involved in useful and informative quality improvement projects that could have valuable messages for others. And yet in the past seven and a half years we have published only 12 quality improvement reports, the most recent in December 1995.
We rely on submitted reports, and one of the reasons for this dearth of published quality improvement reports may be that people are simply too busy improving care to have time to write. But there may be other barriers. The standard form for writing papers in medical journals is the scientific IMRaD (introduction, methods, results, and discussion) structure. This is a convenient and helpful structure for writing about research. When writing a quality improvement report this structure does not quite fit, however. For example, there will be a first methods section—when the measurements are made—and a first results section—when the results are analysed. However, there follows a second methods section describing the implementation of change, perhaps followed by a third methods section when the measurements are repeated to assess progress, and then a second results section describing the improvements. Writing quality improvement reports in this way may not only be difficult but may result in a paper that does not convey the lessons that others would find useful. The editorial team has therefore developed a new structure (box) for describing quality improvement work that we think will reflect this work more accurately and which we hope will encourage authors to write about their experience. A quality improvement report using this structure is republished in this issue.1
Box 1 Structure for quality improvement reports
Brief description of context: relevant details of staff and function of department, team, unit, patient group.
Outline of problem: what were you trying to accomplish?
Key measures for improvement: what would constitute improvement from the patients' point of view?
Process of gathering information: methods used to assess problems.
Analysis and interpretation: how did this information help your understanding of the problem?
Strategy for change: what actual changes were made, how were they implemented, and who was involved in the change process?
Effects of change: did this lead to improvement for patients—how did you know?
Next steps: what have you learnt or achieved, and how will you take this forward?
There is also another fundamental difference between quality improvement reports and reports of original research. Research seeks broadly to produce generalisable results. Thus, trials of thrombolytic treatment in acute myocardial infarction sought to determine whether thrombolysis reduced subsequent mortality, such that the results could be generalised to coronary care units and medical wards treating such patients. On the other hand, a local audit or quality improvement project that seeks to assess whether patients are appropriately treated with thrombolytic therapy does so to monitor and ensure the implementation of evidence based treatment in practice. The results of such a study are not generalisable to other coronary care units in the same way as the preceding research evidence, and for many this would suggest that the work is not publishable. We would disagree. The results may not be generalisable; they are unique to the unit where the audit was undertaken, and most probably to the time of the audit. Any identified problem needs local diagnosis, and local change must occur to create improvement. But a well written and structured quality improvement report may include generalisable methods and strategies for change from which others undertaking similar audits would benefit. Thus, good quality improvement reports should offer a means of disseminating good practice, and there is little doubt in our minds that much that is good about such work is not yet widely reported. As a result practitioners are denied the opportunity to learn from each other as the science of audit and quality improvement matures.
All quality improvement reports submitted to the journal will be peer reviewed, and the decision on acceptance made by the editorial team. Quality improvement reports do not necessarily have to report success. However, all should contain lessons or messages that have relevance to others and that could help them in the process of improving care. Measurements need to be robust and rigorous, and results analysed and interpreted with care. Quality improvement reports should include a reflection on the causes of deficiencies in care. Problems associated with implementing change should not be glossed over but described, and possible causes and solutions discussed.
We hope that the new structure will encourage those with practical experience of quality improvement to write about it in a way that will help others. And we hope that readers will find the new quality improvement reports interesting and useful—please let us know.