Quality improvement reports
Can the sum of projects end up in a program? The strategies that shape quality of care research
Vahé A Kazandjian

Dr Vahé A Kazandjian is President of the Center for Performance Sciences, a global outcomes research organization, and Associate Professor at the Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland. He is the original architect of, and is responsible for, the Maryland Quality Indicator Project (QIP), the continuous performance improvement program used worldwide over the last 18 years by more than 2000 healthcare organizations. In the UK alone, over 125 hospitals from the NHS and the private sector have participated in the international component of the QIP since 1992.

Correspondence to:
Dr V A Kazandjian, Center for Performance Sciences, 6820 Deerpath Road, Elkridge, Maryland 21075–6234, USA

Quality improvement projects need to become ongoing sustainable programs if they are to alter culture, mind set, and perceived responsibilities in the practice of medicine.

Quality improvement (QI) projects are now an integral part of healthcare systems' strategies towards accountability. While the immediate audience for the outcomes of such projects is internal to the care providing organization, accountability to external audiences (communities, government, payers, business coalitions) is increasingly demanded.1 Indeed, while in the past decade outcomes research was primarily the domain of healthcare professionals, it now seems to be the cornerstone of any accountability strategy. In the US such strategies are translated into “report cards”, in the UK into “league tables”, and elsewhere into “hospital ranking reports”. Even when the methods of analysis have not changed—variation, observed to expected ratios, statistically significant differences in utilization or outcomes rates—the landscape has expanded to encompass numerous groups asking for accountability.2

To achieve responsiveness to various audiences, QI projects should measure temporal trends in performance, link outcomes to processes, and ascertain the extent of organizational readiness for promoting higher quality and safer systems of delivery. Epidemiological methods of measurement and analysis, especially those based on rates, are necessary for a successful QI program, yet they are not sufficient. Indeed, the key determinant may lie in the very distinction between a “project” and a “program”.


Healthcare providers often do not consider themselves part of healthcare research. It is this chasm that separates the concept of a “QI project” from that of a “QI program”. When an initiative is seen as a “project”, the incentives for a change in practice style or in a system's processes are practically non-existent: a project has an end point, which predisposes those who do not want change to see it as a passing fad, inconsequential to their beliefs and traditions. A higher likelihood of success exists for approaches designed as programs that are ongoing, continuous, and both epidemiological and clinical in nature. These programs are at their best when they provide, through comparative analysis, performance profiles which providers can emulate and outcomes they want to achieve. This attribute of a QI program reporting from the field contrasts with the sheer distribution of “best practice” guidelines suggested by experts.

“the success of a QI project may be its ability to metamorphose into a QI program”

The QI report by Freeman et al3 in this issue of QSHC elegantly describes the importance of multi-site comparative analysis and the challenges associated with a project rather than a continuous monitoring program. Indeed, the Anglian audit of hip fracture study suggests that the challenges of a QI project concern not only the methods of measurement and dissemination, but also the longer term “buy in” by the providers of care. In essence, the success of a QI project may be its ability to metamorphose into a QI program.


Starting a project is one thing; keeping it going is another. The sustainability of QI projects has often depended on the demonstration of “impact” rather than description of processes. Indeed, if no correlations are identified—and repeatedly so—between what has been done and what has happened, the project will be unable to answer the “so what?” question from sceptics or those unwilling to challenge the status quo. In contrast, when causal or correlative associations are demonstrated, cost/benefit analyses can follow to show the goodness, acceptability, or affordability of the performance.4 Thus, the way is paved for a sustainable ongoing program, able not only to help providers learn about themselves, but also to shape their accountability strategies towards various audiences.

Multi-site programs (regional, national, or international) are ideally suited for demonstrating performance goodness. The comparative analysis such a setting allows across providers, severity of disease stratified patient groups, or variation in organizational structures is essential for a convincing QI methodology. Once the baseline of comparative performance profiles is established, each site may proceed with its own assessment of acceptability and affordability. Eventually, a “value” will be shown to local audiences interested in knowing how well the healthcare system is doing by them.


I venture to suggest that a true performance measurement and improvement model would not only be an ongoing program, but something that becomes part of the very fabric of medicine. When the practice of medicine is intertwined with the simultaneous evaluation of its impact on restoring health or improving functional status and quality of life, then we can have a true discussion about quality and accountability. After all, the term “accountability” is derived from the French “compter”, to count, implying that measurement is an inherent characteristic. Yet measurement without a road map would remain exploratory and miss its destination of responsible professionalism. It is perhaps because of this realization that the 2500-year-old ethical principles “I swear by Apollo the physician . . .” have recently been revisited and updated. Indeed, in an unprecedented collaboration between the Lancet and the Annals of Internal Medicine,5 a new “Charter of Medical Professionalism” has been published which picks up where Hippocrates left off. In addition to the social and ethical responsibilities of the physician, the Charter specifies the need for measurement, disclosure about performance, and more quantitative strategies towards accountability. As a gesture of true professionalism and timely self-evaluation, the Charter supports the notion that QI projects aiming at accountability need to incorporate epidemiological tools for counting, associating, and preventing undesirable processes. To do so, the performance of individuals and organizations should be continuously measured, not in a desire to reprimand or punish but to enhance and celebrate. Until performance measurement and improvement are seen as parallel tracks to the practice of medicine, there can be only research studies, which have much less ability to alter culture, mind set, and perceived responsibilities.

The paper by Freeman et al3 convincingly leads the way to such considerations and, hopefully, for further discussion.