Consensus publication guidelines: the next step in the science of quality improvement?
R G Thomson

Correspondence to: Professor R G Thomson, Director of Epidemiology and Research, National Patient Safety Agency and Professor of Epidemiology and Public Health, Newcastle upon Tyne Medical School, UK; Richard.thomson@newcastle.ac.uk

The time is ripe for a formal structured review of guidance on quality improvement reports

Samuel Beckett wrote: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” Fiona Moss and I tried some time ago, on behalf of this journal, to produce a structure for quality improvement reports that would facilitate and encourage their publication;1 the BMJ subsequently adopted the structure for its authors.2 Now, several years on from those first attempts, Davidoff and Batalden suggest new publication guidelines that build on this earlier version.3 These should be welcomed: in the spirit of improvement and intellectual evolution, it would be very surprising if the first attempt were to remain unchanged and unchallenged.

Before commenting on their proposals, it is worth reflecting on the original purpose of the development of quality improvement reports. This was based on the demand, within an emerging science and practice of quality improvement, for shared learning and dissemination of good practice. Those in the field knew that there were many excellent examples of successful projects where real changes had been made with demonstrable impact upon patient care, but that these examples were rarely disseminated such that others could draw upon this experience.

Why was this? As Davidoff and Batalden3 point out, one reason was the nature of the people responsible for quality improvement work: they are often highly committed people whose primary motivation is delivering and improving good clinical care. For most of them the incentives and perceived rewards of publication are low, and the often thankless cycle of writing, submission, revision, and rejection is a distraction from the next patient or the next round of quality improvement.

Alongside this were other interwoven issues. Firstly, original publication in scientific journals (in contrast to reviews, opinion pieces, and editorials) is largely focused on research articles whose aim is to report generalisable results. Secondly, the internationally accepted IMRaD (introduction, methods, results and discussion) format was designed to meet the needs of reports of original research, not of quality improvement. Thirdly, editors and reviewers were largely socialised into a mindset that gave predominance to original research, compounded by the structure and guidance available to peer reviewers and authors. Thus, although not explicit in the original arguments, creating a new structure for quality improvement reports also acknowledged that such reports were different, and gave them a focus and identity that enabled them to escape the shackles of the traditional journal article.

Times have moved on since then but, even with the new structure available and increased journal capacity, it is still a struggle to get such reports written and submitted; they remain very much a minority of the articles published in this journal. Will the revised proposal help with this?

Before it can do so, I think there are several points to consider. Firstly, exactly what sort of activity should be reported in quality improvement reports? I am not sure that the article by Davidoff and Batalden3 is quite clear about that. It seems to consider not only reports of effective quality improvement projects (as do the current guidelines) but also studies of the efficacy of quality improvement methods. This needs clarification. Assessing the efficacy of methods requires, in its purest form, robust intervention studies such as randomised controlled trials (probably cluster randomised) in order to produce generalisable results, and guidelines in this area already exist (e.g. CONSORT4).

Furthermore, such studies are likely to be best applied to methods that can be generalised across a range of settings and topics—for example, the use of statistical process control charts. But the original concept of quality improvement reports, at least to my mind, was to enable practitioners to share and learn from practical examples of projects—for example, a clinician who wants to undertake a project to improve the quality of acute treatment of myocardial infarction in his or her hospital would seek to find examples of others who have done the same in order to be able to apply and/or adapt their methods and experience to his or her own circumstances. This is very much about sharing experience and learning rather than sharing results. Indeed, we argued originally that the methods of quality improvement reports might be more generalisable than the results.1

Secondly, the suggested reversion to the IMRaD structure is worthy of challenge. Does it really fit the purpose of quality improvement reports? The answer may well be “yes” for studies of methodological efficacy, but I am not convinced that it is for reports of practical examples of quality improvement. The authors need to justify this further, not least by explaining why it would be preferable to the present accepted structure. I do not believe the use of IMRaD is justified on the basis of incorporating “several important additional topics”: those listed in the article (such as prior information available on the problem area and assessment of the project’s limitations) are clearly covered within quality improvement reports already published using the present structure, and they could be made more explicit by revising the original guidance without reverting to IMRaD.

Finally, as the authors point out, most quality improvement work is never made publicly available. This is undoubtedly true, but one only has to think about how many quality improvement projects may be in progress in a single acute hospital, and then multiply that across all acute hospitals internationally, to recognise that it will always be so. Publication in academic journals is therefore likely to be only a limited, albeit valuable, method for disseminating such practice; it needs to be part of a suite of methods of publication and dissemination. Among the former, web based databases of projects, holding only limited information but providing contact details so that others can communicate with the project leads, could be a major development; web sites such as the IHI site5 and the recently released saferhealthcare site6 can contribute here. Among the latter, methods such as clinical networks and quality improvement cooperatives can fulfil a similar purpose.

In summary, I welcome the proposal to enhance the current guidelines for publication of quality improvement reports, and the authors’ suggestion of a more structured and formal approach to refining guidance is very sensible. The original guidance developed by QSHC involved a very similar informal process to that described by Davidoff and Batalden.3 The time is now ripe for a more formalised approach, and experience from other groups such as CONSORT4 or, perhaps more relevantly, the International Patient Decision Aids Standards (IPDAS)7 collaboration could be very helpful in developing the next stages proposed by the authors. Both content and structure should be addressed.
