Background and context
Between us, we have approximately 40 years of experience in designing and conducting community-based evaluations of a variety of health programmes, many of which also struggle with integrating continuous improvement into their daily work. Our own experiences have suggested both the need for and the logic of integrating our conceptual approaches to evaluation and improvement, since fundamentally they come down to the same cycle—best represented by the Model for Improvement and the PDSA cycle.1 Our reflections on evaluation and improvement were sparked by reading Lambert and Shearer.2
While readers may be very familiar with the improvement literature from reading this journal and other resources, we also want to draw their attention to the rich and always evolving literature on evaluation. At the 14th invitational ‘Summer Symposium on Building Knowledge for Improvement,’ a group of health professions educators were privileged to engage in discussions with our ‘wizards’ Ray Pawson and Nick Tilley, and explored multiple approaches to evaluation. Many of these approaches have common grounding in the disciplines (particularly the social sciences), yet they have acquired distinct reputations in both academe and practice. While this observation has not been established through the scientific method or tested via a randomised controlled clinical trial, evaluation practitioners and theorists appear to exhibit rock-star/groupie or disciple-like loyalty to a specific approach. This may be a function of influence at a particular point in time from a programme officer at a funding agency, an external evaluator or a teacher who expects or promotes a certain approach to evaluation. This sort of devotion is seen for evaluation models such …
Footnotes
Competing interests None.
Provenance and peer review Commissioned; not externally peer reviewed.