Summary points
What was already known before this study
Evidence-based application of technologies, methods, and interventions is recognised as good practice and ethically important in all aspects of health care [1]; technologies, methods, and interventions must be proven to be safe, effective, and the most appropriate compared with other methods and alternative solutions.
Given the essential role of information technology (IT) systems in the delivery of modern health care, and the dependence of health professionals and organizations on them, it is imperative that they are thoroughly assessed through robust evaluations, as with any other health process or technology. This principle is advocated and elaborated in the Declaration of Innsbruck [2].
In the past decades it has been demonstrated that IT systems can not only be beneficial but can also have unintended, potentially detrimental effects, as documented in [3], [4], just like other interventions in (social) environments, as extensively described in [5]. It is imperative to monitor IT system implementations and their effects during the whole life cycle of the system. Unintended and intended consequences, as well as the socio-technical circumstances under which they occur, are important to report, since they provide insight that can inform system developers and the implementation of similar systems elsewhere. Although there might be pressure to suppress bad news, there are several overriding imperatives for reporting unintended consequences [6]. Assessment of the development and implementation process itself also provides indications of good practice and hence contributes to successful Health Informatics applications [7].
When it comes to decisions about how best (if at all) to use IT systems for a particular task in the health care delivery process, objective appraisal of opportunities and options requires access to the available evidence. Part of this evidence can be found in the scientific literature. Not only are the volume and coverage of evaluation studies in the literature small given the importance and impact of Health Informatics on modern health care practice and delivery [8], but the evidence that is available is also to some extent difficult to appraise due to poor reporting practice [9].
There is growing published evidence of the impact of Health Informatics on health care [10], and, increasingly, reviews appear that summarize the available evidence in the form of a narrative review, a systematic review, or a formal meta-analysis (e.g. [11]). However, the wide variety of IT systems and their application domains, as well as the various kinds of outcome measures, has limited the generalisability of findings and thus hampered meta-analysis, as reported for example in [11]. Other studies too have shown that publications on the evaluation of IT interventions in health care have several shortcomings that severely hamper the proper appraisal of these publications; see the review in [12, pp. 243–323].
During a specially convened expert workshop on Health Informatics evaluation – HISEVAL – sponsored by the European Science Foundation and held in Innsbruck [2], the concern was raised that without proper guidelines for the design, planning, execution, and reporting of evaluation studies in Health Informatics, it would be difficult to build up a proper evidence base for making informed decisions regarding IT interventions in health care.
In other domains of medicine, these problems have also been identified. Work done in the early 1990s led to the publication of the CONSORT statement in 1996 [13]. The CONSORT statement provides guidelines for the publication of randomized controlled clinical trials (RCTs) and has been adopted by many medical journals. It was later revised [14] and has been extended to cover specific kinds of RCT designs, such as cluster RCTs [15] and RCTs assessing nonpharmacologic treatments [16], and has been shortened for reporting RCTs in journal and conference abstracts [17]. Several other guidelines have been developed following the approach of the CONSORT statement, for example the QUOROM statement for the reporting of meta-analyses [18], STARD for the reporting of diagnostic studies [19], and STROBE for observational studies [20]. An overview of the various guidelines has been published in [21]; such an overview is also available on the EQUATOR Network website [22].
CONSORT has proven its value over time. Studies have demonstrated that the quality of reporting of controlled trials has improved since the introduction of the CONSORT statement [23], [24]. Guidelines for good reporting are likely to influence the quality of the studies themselves as well, because they require a clear demonstration of sound scientific methodology.
Health Informatics is a significant area of health systems investment that potentially affects every professional and patient. It is therefore evident that Health Informatics should adopt similarly robust guidelines in order to build a more solid evidence base. Health Informatics applications potentially affect health care organizations, health care delivery, and outcomes. A Health Informatics application may not directly affect the medical condition of the patient – as drugs do – but it will generally have an indirect effect by assisting care givers in their decisions and their patient management. Moreover, the study designs covered by CONSORT and other reporting guidelines are not always the most appropriate in Health Informatics evaluation research [2], [25], [26]; other study designs, both qualitative and quantitative, are frequently used.
These observations have led us to develop guidelines for the reporting of evaluation studies in Health Informatics, which build upon the work of others yet also take into account specific issues that are often central to Health Informatics evaluation studies. They are intended to be applicable to the spectrum of quantitative and qualitative study designs found in Health Informatics research.
The objective of STARE-HI (STAtement on the Reporting of Evaluation studies in Health Informatics) is to provide guidelines for writing evaluation reports in Health Informatics that can be reliably interpreted by readers, and by doing so to improve the quality of published evaluation studies in Health Informatics and thus to strengthen the evidence base of Health Informatics.
These objectives are pursued by presenting guidelines for reporting, formatted as a checklist with an elaboration of each item.
An initial set of items was drafted by the editorial team (represented by the authors of this paper), based on discussions at the HISEVAL workshop and on their experience in assessing the quality of papers, either for a review or a meta-analysis or as reviewers and editors in the editorial process. The CONSORT statement [13], [27], criteria for reviewers of biomedical informatics manuscripts [28], the QUOROM statement [18], the STARD statement [19], and other more general recommendations were used as input.
The scope of STARE-HI is to provide guidelines for the reporting of evaluations in Health Informatics, independent of the evaluation method used. These guidelines therefore have a general character, with a main focus on the description of the context in which the study took place, the description of the methodology, general recommendations for the reporting of results, and the structuring of the discussion. In cases where a study design has been selected for which a more specific guideline already exists, that guideline should be consulted as well.
1. Title
The title should give a clear indication of the type of system evaluated, the study question, and the study design. The use of the term “evaluation” (or “assessment” or “study”), preceded by a specification of the type of study, in the title helps readers to identify evaluation studies (e.g. “Evaluation of the effect of a CPOE system on medication errors: a retrospective record analysis”).
2. Abstract
The abstract should preferably be structured and must clearly describe the objective, setting, study design, main results, and conclusions of the study.
What this study has added
STARE-HI was developed to provide guidelines for writing and interpreting evaluation reports in Health Informatics, and by doing so to improve the quality of published IT evaluation studies in Health Informatics and thus to improve the evidence base of Health Informatics. It encourages transparency in the reporting of IT evaluation studies.
STARE-HI was developed in an iterative process involving volunteer experts from various Health Informatics domains. No formal procedures (e.g. the Delphi technique) or formal consensus methods were applied.
We present STARE-HI as a guideline for reporting IT evaluation studies in health care, with detailed recommendations for each aspect that is particularly relevant to an evaluation study.
Whether STARE-HI is feasible for the broad range of (quantitative and qualitative) Health Informatics evaluation papers can only be shown once it is used by authors and editors. We invite anyone to report their experience, which may be incorporated in subsequent updates of STARE-HI. A subsequent, more rigorous evaluation of STARE-HI can then follow.
Authors' contributions
The idea for STARE-HI was raised during the HISEVAL workshop in Innsbruck [2]. J.T. took the initiative to develop STARE-HI and is the guarantor of the study. J.T. and E.A. drafted a first list of issues and the first version of the manuscript. J.B., N.d.K., P.N., and M.R. all contributed by critically assessing the items and their descriptions in several iterations; they made suggestions for expansion and provided various parts of the text. J.T. and E.A. integrated the various contributions into the final manuscript.
Conflicts of interest
J.T. is editor of the International Journal of Medical Informatics. He was chair of the Working Group on Technology Assessment and Quality Development of the International Medical Informatics Association (IMIA), with J.B. as co-chair. Currently N.d.K. chairs this working group.
E.A. is chair of the Working Group on Assessment of Health Information Systems of the European Federation for Medical Informatics (EFMI). J.B. and P.N. are co-chairs of this working group.
Acknowledgements
This work has been validated and enriched by comments from many during its preparation and development. In particular we acknowledge the contributions of: Jos Aarts, Emily Campbell, Petra Knaup, Christof Machan, Zahra Niazkhani, Christian Nøhr, Habib Pirnejad, Joshua Richardson, Rainer Röhrig, Dean F. Sittig, Murat Sincan, Christa Wessel, Johanna Westbrook and Jeremy Wyatt.
Endorsements: EFMI formally endorsed STARE-HI at its board meeting of 30 May 2007 in Brijuni, Croatia. In November