When evidence says no: gynaecologists’ reasons for (not) recommending ineffective ovarian cancer screening
Odette Wegwarth,1 Nora Pashayan2

1 Center for Adaptive Rationality/Harding Center for Risk Literacy, Max-Planck-Institute for Human Development, Berlin, Germany
2 Institute of Epidemiology and Healthcare, Department of Applied Health Research, University College London, London, UK

Correspondence to Dr Odette Wegwarth, Center for Adaptive Rationality/Harding Center for Risk Literacy, Max-Planck-Institute for Human Development, Berlin, Germany; wegwarth@mpib-berlin.mpg.de


Introduction

Most patients likely assume that physicians offer medical procedures backed by solid scientific evidence demonstrating their superiority, or at least non-inferiority, to alternative approaches.1 Doing otherwise would waste healthcare resources urgently needed elsewhere in the system, jeopardise patient health and safety, and undermine patients’ trust in medicine2 and care. In some instances, however, physicians’ healthcare practices appear to act against scientific evidence.3–5 For example, evidence from two large randomised controlled trials6 7 on the effectiveness of ovarian cancer screening showed that the screening has no mortality benefit, neither cancer-specific nor overall, in average-risk women but causes considerable harms, including false-positive surgeries in women without ovarian cancer. Consequently, the US Preventive Services Task Force and medical associations worldwide recommend against ovarian cancer screening.8 Nevertheless, a considerable number of US gynaecologists persist in recommending the screening to average-risk women.9 To understand why physicians continue using a practice called into question by scientific evidence, we investigated gynaecologists’ reasons for or against recommending ovarian cancer screening, their assumptions about why other gynaecologists recommend it, and the association between their knowledge of basic concepts of cancer screening statistics10 and their recommendation behaviour.

Methods

We surveyed a national sample of US outpatient gynaecologists, stratified by the distribution of gender and years in practice of gynaecologists in the American Medical Association (AMA) Masterfile (table 1). The survey (see online supplementary materials) was part of a larger project on gynaecologists’ estimates and beliefs about the evidence on ovarian cancer screening. Detailed methods are reported elsewhere.9 For analysis, we classified respondents as ‘screeners’ (those who recommend the screening to average-risk women) or ‘non-screeners’ (those who do not) and compared response proportions using a χ2 test. We used logistic regression to investigate the associations between recommendation behaviour, knowledge of concepts of cancer screening statistics10 (measured by four specific questions; see table 2) and demographics. A Wilcoxon signed-rank test was used to compare gynaecologists’ own reasons with the reasons they attributed to their colleagues.
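As a rough illustration only (not the authors’ actual analysis code), the comparisons described above could be run along the following lines; the data file, column names and codings are assumptions introduced purely for this sketch.

```python
# Hypothetical sketch of the analyses described above; the survey data file,
# column names and codings are assumptions, not the authors' materials.
import pandas as pd
from scipy.stats import chi2_contingency, wilcoxon
import statsmodels.formula.api as smf

df = pd.read_csv("gynaecologist_survey.csv")  # hypothetical file

# Classify respondents: 'screener' = recommends screening to average-risk women
df["screener"] = (df["recommends_screening"] == 1).astype(int)

# Chi-square test comparing the proportion endorsing a given reason
# (eg, fear of litigation) between screeners and non-screeners
table = pd.crosstab(df["screener"], df["reason_fear_of_litigation"])
chi2, p, dof, _ = chi2_contingency(table)

# Wilcoxon signed-rank test: own reasons vs reasons attributed to colleagues
# (paired responses within the same respondent)
stat, p_wilcoxon = wilcoxon(df["own_reason_score"], df["colleague_reason_score"])

# Logistic regression: recommendation behaviour on knowledge of cancer
# screening statistics (share of the four knowledge questions answered
# correctly), adjusted for demographics
model = smf.logit(
    "screener ~ knowledge_score + gender + years_in_practice", data=df
).fit()
print(model.summary())
```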


Table 1

Distribution of demographic characteristics of the survey sample, compared with the AMA Masterfile for gender and years in practice

Table 2

US gynaecologists’ reasons for or against recommending ovarian cancer screening, their assumptions about why other gynaecologists recommend it, and their knowledge of concepts of cancer screening statistics

Results

Of 980 gynaecologists invited, 876 started the survey, 475 were excluded (inpatient care: 173, quota filled: 171, survey non-completion: 131) and 401 completed the survey (response rate, 63.1%; 401/(980–173–171)).

Screeners (n=231, 57.6%) reported that their recommendations were most heavily influenced by patient expectations and fear of litigation, followed by their belief in the ability of screening to reduce disease-specific mortality and/or incidence (table 2). In contrast, non-screeners’ (n=170, 42.4%) recommendations were mainly influenced by current guidelines and by concerns about the harms of screening (eg, overdiagnosis), aspects that played only minor roles for screeners (p<0.001). Screeners also assumed that a larger proportion of colleagues recommend the screening than non-screeners did (mean 42.2% (SD 25.5) vs 13.6% (SD 13.1), p<0.001), and for largely similar reasons, with one exception: screeners believed their colleagues were more often influenced by financial interests than they themselves were (14.3% vs 3.5%, p<0.001). Screeners nevertheless ranked financial interests as the least likely reason for colleagues to recommend the screening, whereas non-screeners considered financial interests the most relevant reason for colleagues to recommend it (14.3% vs 43.5%, p<0.001).

In univariate analysis, gynaecologists’ knowledge of concepts of cancer screening statistics was significantly associated with all listed reasons for recommending the screening (p<0.001) except fear of litigation and conflicts of interest. In adjusted multivariate analysis, accounting for fear of litigation and conflicts of interest (both already known to negatively influence physicians’ evidence-based advice11 12), we found that gynaecologists’ recommendations were independently associated with their knowledge of cancer screening statistics and with fear of litigation, but not with conflicts of interest. Knowledge of cancer screening statistics was the strongest predictor: the odds of being a non-screener were nearly four times higher for gynaecologists who answered 75% or more of the statistical concept questions correctly than for those who answered 50% or less correctly (OR 3.58, 95% CI 2.28 to 5.65; p≤0.001) (regression table; see online supplementary materials).
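To make the reported odds ratio concrete, the minimal sketch below shows how an OR of this kind and its 95% CI can be obtained by exponentiating the coefficients of a fitted logistic regression; the variable names and data file are hypothetical and do not reproduce the authors’ model.

```python
# Minimal illustration (not the authors' model): assumes a binary outcome
# 'non_screener' and a binary predictor 'high_knowledge'
# (1 = answered >=75% of the four knowledge questions correctly,
#  0 = answered <=50% correctly), plus the two adjustment variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gynaecologist_survey.csv")  # hypothetical file, as above

model = smf.logit(
    "non_screener ~ high_knowledge + fear_of_litigation + conflict_of_interest",
    data=df,
).fit()

# Odds ratios and 95% CIs are the exponentiated coefficients and
# confidence limits of the logistic model
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(odds_ratios["high_knowledge"], conf_int.loc["high_knowledge"])
# An OR of about 3.6 means the odds of being a non-screener are roughly
# 3.6 times higher in the high-knowledge group than in the low-knowledge group.
```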

Discussion

Gynaecologists who recommend ovarian cancer screening, a practice neither justified by evidence nor supported by medical associations, follow some of the same reasoning (eg, fear of litigation) that has been observed in other clinical settings.13 Our study uncovers an additional mechanism behind this potentially harmful behaviour: misconceiving basic concepts of cancer screening statistics. Physicians who wrongly believe that detecting more cancers, or a higher proportion of early-stage cancers, proves that screening saves lives are more likely to overvalue screening’s benefits and undervalue its harms; they are also more likely to struggle to accept contradictory evidence and to adequately address patients’ medically unfounded wishes or unwarranted fears. Believing that most colleagues act and think the same may further undermine essential progress towards practising evidence-based care.

Because our survey was conducted with US gynaecologists, the generalisability of our results may be limited. In particular, malpractice litigation is a phenomenon mainly reported for the US healthcare system, and gynaecology is among the disciplines most exposed to it.14 With respect to knowledge of concepts of cancer screening statistics, however, previous studies in populations from various countries found that a considerable number of physicians are misled by relative as opposed to absolute risk formats,15–19 have difficulty calculating the positive predictive value of tests20–23 or perceive cancer screening statistics as challenging.10 24 This suggests that the issue is not restricted to our survey population but is instead universal.
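As one concrete illustration of the kind of calculation physicians reportedly find difficult, the short sketch below computes a test’s positive predictive value from sensitivity, specificity and prevalence via Bayes’ rule; the input figures are purely hypothetical and are not taken from the ovarian cancer screening trials cited above.

```python
# Hypothetical worked example of a positive predictive value (PPV) calculation;
# the sensitivity, specificity and prevalence figures are illustrative only.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Example: a test with 80% sensitivity and 95% specificity applied to a
# population in which 0.5% of women have the disease
ppv = positive_predictive_value(0.80, 0.95, 0.005)
print(f"PPV = {ppv:.1%}")  # about 7%: most positive results are false positives
```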

Our survey therefore calls on the medical community to improve how medical statistics is taught and learnt in medical schools and in continuing medical education. A critical mass of statistically literate physicians will not resolve all healthcare problems, but it will promote greater patient safety and more evidence-based care.

References

Footnotes

  • Contributors OW had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: OW. Obtaining funding: OW. Acquisition, analysis and interpretation of data: OW, NP. Drafting the manuscript: OW. Critical revision of the manuscript: OW, NP. Statistical expertise: OW, NP. Study supervision: OW, NP.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval The study was approved by the institutional review board of the Max-Planck-Institute for Human Development, Berlin (Germany).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement All data relevant to the study are included in the article.