
Beyond polypharmacy to the brave new world of minimum datasets and artificial intelligence: thumbing a nose to Henry
  1. Adam Todd1,2,
  2. Barbara Hanratty2,3
  1. 1 School of Pharmacy, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  2. 2 Patient Safety Research Collaboration, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  3. 3 Population Health Science Institute, Newcastle University, Newcastle upon Tyne, Tyne and Wear, UK
  1. Correspondence to Professor Adam Todd; adam.todd{at}ncl.ac.uk


Dealing with uncertainty is an inherent part of scientific discovery. One of the ways in which scientists have tried to overcome uncertainty is through the concept of measurement—defined as the act or the process of finding the size, quantity or degree of something.1 Over centuries, standardised and consistent measurement systems have assumed fundamental importance in societies in parallel with the increasing dominance of the scientific paradigm. The 12th century proposal attributed to Henry I of England to standardise the measurement of a yard as the distance from his nose to outstretched thumb2 may appear egotistical and humorous by modern standards. But the reader does not need to stray far outside the scientific paradigm to appreciate the range of philosophical standpoints on the nature of measurement and the ongoing debate over the concept of measurable quantity.3 4 Examples of outcomes that cannot be accurately measured or quantified are easy to find in human biology and medicine. Human consciousness, for example, encompasses thoughts, memories and feelings but is not something that can be easily measured, explained or understood. In this issue of BMJ Quality and Safety,5 we have a study that highlights the complexity and challenges of measuring or quantifying polypharmacy—the concomitant use of multiple medications. This apparently simple concept poses major challenges for researchers, healthcare professionals and policy makers in how to define, measure and operationalise it. Here, Wabe and colleagues5 show that polypharmacy rates (defined in this study as ≥9 concurrent medicines) are highly variable depending on several different factors: (1) the type of data used in assessment (prescribing or administration); (2) the time periods used to capture the data; and (3) the inclusion criteria for the medications used to calculate polypharmacy.
The work shows that, in a cohort of older people based in residential aged care facilities in Australia, the prevalence of polypharmacy could vary by almost 30 percentage points, ranging from 33.9% to 63.5% of residents, depending on how it was measured. The study did not attempt to measure the appropriateness of the medications in the cohort, but the authors argue that if polypharmacy is to be used as a national quality indicator for prescribing, there should be a consistent approach to measurement and guidance on how to calculate and interpret the findings.
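To make this measurement sensitivity concrete, the toy sketch below (entirely hypothetical residents, medicines, thresholds and time windows; not the method used by Wabe and colleagues) shows how the same administration records can yield quite different polypharmacy prevalence estimates once the numerical threshold and the data-capture window change:

```python
from datetime import date, timedelta

# Hypothetical administration records: (resident, medicine, date given).
records = [
    ("r1", "aspirin",        date(2024, 1, 3)),
    ("r1", "atorvastatin",   date(2024, 1, 3)),
    ("r1", "metformin",      date(2024, 1, 20)),
    ("r2", "furosemide",     date(2024, 1, 5)),
    ("r2", "bisoprolol",     date(2024, 1, 5)),
    ("r2", "ramipril",       date(2024, 1, 6)),
    ("r2", "spironolactone", date(2024, 1, 6)),
    ("r2", "empagliflozin",  date(2024, 1, 7)),
]

def prevalence(records, threshold, start, window_days):
    """Share of residents given >= `threshold` distinct medicines
    within `window_days` of `start`."""
    end = start + timedelta(days=window_days)
    meds_per_resident = {}
    for resident, medicine, given in records:
        meds_per_resident.setdefault(resident, set())
        if start <= given < end:
            meds_per_resident[resident].add(medicine)
    flagged = sum(1 for meds in meds_per_resident.values()
                  if len(meds) >= threshold)
    return flagged / len(meds_per_resident)

start = date(2024, 1, 1)
# Same residents, same records; the "polypharmacy rate" moves with the rules:
print(prevalence(records, threshold=3, start=start, window_days=7))   # 0.5
print(prevalence(records, threshold=3, start=start, window_days=30))  # 1.0
print(prevalence(records, threshold=5, start=start, window_days=30))  # 0.5
```

Swapping administration records for prescribing records, or changing which medicines count (eg, excluding short courses or as-needed drugs), would shift the counts again, which is exactly the variability the study documents.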

The debate about the measurement of polypharmacy is not new: there has been much discussion over the years about how to think about polypharmacy and how best to measure it. A systematic review by Masnoon and colleagues,6 published in 2017, described the highly variable nature of polypharmacy definitions used in the literature. They found a total of 138 definitions—111 of these were related to the number of medications, 15 were numerical definitions incorporating duration of therapy or healthcare setting and 12 were descriptive definitions (eg, simultaneous and long-term use of different medications by the same individual). The most commonly used numerical definition of polypharmacy is the regular use of five or more medications7—but given the increasing reliance on medicines, the term ‘hyperpolypharmacy’ was developed to represent the use of 10 or more medications.8 For certain disease states, further definitions have been developed to conceptualise polypharmacy; for example, ‘super hyperpolypharmacy’ for people with heart failure, representing the use of 15 or more medications.9 The challenge in this context is that, according to international treatment guidance, a 4-pillar treatment approach should be used to manage people with heart failure with reduced ejection fraction—(i) an angiotensin‐converting enzyme inhibitor, an angiotensin receptor blocker or an angiotensin receptor–neprilysin inhibitor, (ii) a β‐blocker, (iii) a mineralocorticoid receptor antagonist and (iv) a sodium‐glucose cotransporter 2 inhibitor. If a loop diuretic is added to treatment for symptom control, and polypharmacy is conceptualised as the regular use of five or more medications, most people with heart failure will meet this criterion. For example, Cobretti and colleagues10 assessed medication complexity in older people with heart failure and showed that only 1 of the 145 patients in the study cohort was taking fewer than five medications.
It is not practical to keep adding new definitions of polypharmacy to keep up to date with treatment guidance for different disease states. What next? Mega super hyperpolypharmacy?

Rather than generating yet more ways to refine the definition or measurement of polypharmacy, perhaps it is time to move beyond the concept completely. The inherent complexity of polypharmacy is not well served by simplistic and arbitrary measures. Most measures of polypharmacy fail to take into account individual factors that have the potential to influence medication use. Examples include gender (patterns of multiple long-term conditions differ between men and women, with women more likely to experience morbidity-related conditions and men more likely to experience mortality-related conditions11), socioeconomic position (medication use has been shown to vary by individual-level or area-level deprivation12) or ethnicity (minority ethnic groups potentially have a higher risk of developing certain multiple long-term conditions13). Genetic make-up may also be an important factor, with some people more likely to experience treatment failure, adverse drug reactions or drug–drug interactions from their medicines. There is also the added complexity of the intersection of these factors and how this could impact medication use (eg, the intersection of gender, socioeconomic position and ethnicity). In these contexts, considering polypharmacy in binary terms, as simply ‘good’ or ‘bad’, is unhelpful—whether that be in assessing prescribing practice or markers of care in a policy context. A basic measure of polypharmacy may be useful from a research perspective, where large datasets are used to assess prescribing patterns over time, but from a practice and policy perspective, it is time to question the utility of the concept.

If we move beyond the concept of polypharmacy but still want to assess prescribing practices and medicine safety, we are left with the question of how best to do that. One potential starting point is to think about the data that can be used to assess prescribing appropriateness and medication risk. In their study, Wabe et al demonstrated that access to data on medicines administered, instead of prescribing data, can make a significant difference to the assessment of polypharmacy. For example, in the UK, we have some of the most robust data in the world on prescribing in community settings,14 but almost no timely information on medications dispensed, and even less understanding of what happens after the patient takes the medication home. Linkage of data from different sources would provide some answers, but the ethical and governance barriers for researchers are often overwhelming. Recent work to inform the development of a national minimum dataset for care homes in England aimed to generate a core set of information about individual residents, drawn from different routine data sources.15 The DACHA (Developing research resources And minimum data set for Care Homes' Adoption and use) study has established the value of, and support for, a minimum dataset in the UK, but despite having the expertise, time and permissions in place, the barriers to linking primary care data were immense.16 Sharing routine data across health and care settings has the potential to revolutionise the assessment of prescribing appropriateness and reduce medication-related harm. This is particularly important when an older person moves in or out of hospital or is discharged to a care home, for example.17 Data on high-risk drug prescribing, therapeutic drug monitoring, adverse drug reactions and medication reviews are all potentially critical to patient safety.
But how much more powerful would this information be if supplemented with data on mobility, changes in functional ability or recent transitions between care settings, for example? Access to population-level data of this sort would support the development of systems to alert care professionals to the ‘tipping points’ when prescribing medication is likely to cause harm to a patient based on their individual circumstances. There has already been a great deal of progress in this field in terms of using routinely collected data to predict outcomes for individual patients. For example, Calero-Díaz and colleagues, as part of the AI-MULTIPLY consortium, have used artificial intelligence—specifically deep learning approaches—to predict the likelihood of hospital readmission for people with multimorbidity who had recently been admitted to hospital.18 Using a wide range of data, including patient demographics, prescribing information, admission information and long-term conditions, the models achieved high predictive performance for hospital readmission. This example relates only to hospital readmission, but future research will develop models to predict a range of outcomes based on different medicines and prescribing scenarios. To fully embrace the possibility of being able to predict risks around medicines use, it is essential that data are made available to support this process. The work by Wabe and colleagues expertly highlights the challenges and complexities of trying to measure polypharmacy in a standardised way. To move forward with this debate, focusing our efforts on developing nationally agreed minimum datasets across health and care systems may be a far more effective use of time and resources than trying to decide the most appropriate way to measure or quantify polypharmacy.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Footnotes

  • X @adamtodd138

  • Collaborators Not applicable.

  • Contributors AT had the idea for the editorial with discussion and input from BH. AT led the drafting of the manuscript with input and revisions from BH. AT is the guarantor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.