Abstract
Introduction Delphi procedures are frequently used to develop performance indicators, but little is known about the validity of this method. We aimed to examine the consistency of indicator selection across different procedures and across different panels.
Methods Analysis of three indicator set development procedures: the EPA Cardio project, which used international GP panels; the UniRap project, a Dutch GP indicator project; and the Vitale Vaten project, which used a national multidisciplinary health professional panel and a stakeholder panel.
Results With respect to clinical indicators, consistency between procedures varied according to the origin of the indicators. In Vitale Vaten the multidisciplinary panel of health professionals again rated 63% of the international EPA Cardio indicators as valid, but only 13% of the UniRap GP set.
With respect to organisational indicators, 27 indicators were rated in both EPA Cardio and Vitale Vaten. In the Vitale Vaten project 17 indicators (63%) were validated, including eight of the nine indicators validated in EPA Cardio.
Consistency between panels was moderate; the health professional panel, being the most critical, played a decisive role in indicator selection.
Conclusion The consistency of selected performance indicators varied across procedures and panels. Further research is needed to identify underlying determinants of this variation.
- Quality indicators
- healthcare
- Delphi technique
- cardiovascular diseases
- qualitative research
- healthcare quality
- organisation
- quality of care
Supplementary materials
Web Only File
Footnotes
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.