The scientific understanding of how people perceive and code risks and then use this information in decision making has progressed greatly in the last 20 years. There is considerable evidence that people employ simplifying heuristics in judgement and decision making. These heuristics may lead to bias in how people interpret information. However, much of our understanding of risk perception is based on laboratory studies. It is much less clear whether risk perception in the real world (as in the case of medical treatments) exhibits the same patterns and biases. This paper reviews the published literature on risk perception in patients who face substantial treatment risks. It examines how accurate patients' perception of risk is, what factors affect the perception of risk, and several possible explanations for why patients' risk perception is not always accurate.
A considerable body of work within cognitive psychology has examined how people understand risks and how this information is used in decision making,12 and significant advances in our understanding of risk perception and decision making have been made. Much of this work describes how people's decision making is not strictly rational but rather is subject to systematic biases.12 The evidence suggests that people use short cuts when making decisions in order to simplify the decision making process and these short cuts can lead to biases. Such short cuts are referred to as heuristics. In this context, bias refers to systematic overestimation or underestimation which may arise as a result of a heuristic. This knowledge gained from cognitive psychology has important implications for medical decision making, how people understand risks, and the nature of informed consent.
This paper presents some of the main findings from the psychology literature on risk perception in conjunction with evidence from the clinical literature regarding patients' perception of risk. The review largely focuses on studies which report data regarding patients' actual perceptions of risk related to their treatment or disease. Studies which present hypothetical scenarios to people are not included, because patients' perceptions of the risks of their disease will be very different from those of participants in experiments using hypothetical events. Further discussion of this subject is provided in this supplement by Edwards and Elwyn.3
Heuristics and their effect on risk perception
People's interpretation of risk information is guided by heuristics.4 This has implications for how information is presented to patients and for gaining informed consent. The availability heuristic predicts that people judge an event as more likely or more probable if it is easily brought to mind.4 For example, Slovic et al4 found that survey respondents overestimated the frequency of rare causes of death (such as murder and car accidents) and underestimated the frequency of more common causes (such as stroke and stomach cancer). The overestimated causes may have been more easily brought to mind because they are dramatic or sensational. A more recent example of the availability heuristic can be seen in the huge public concern over recent railway accidents in the UK, which have actually caused relatively little loss of life. In contrast, the thousands of deaths in road accidents every year appear to cause much less anxiety among the public or the media. The perception of risk is also affected by other factors, including immediacy of effect (whether the effect of the risk is perceived to be immediate or in the future), controllability (the extent to which people can exert any control over the risk), novelty (whether the risk is new or established), and catastrophic potential (phenomena are viewed as more risky if they can lead to catastrophic consequences, such as nuclear reactor meltdown, than phenomena which may lead to an equal number of deaths spread over a longer period).56 Slovic et al also identified other factors, including natural versus man-made origin (man-made phenomena are viewed as riskier), overconfident experts (experts are overconfident about the accuracy of their judgements, which causes them to underestimate the degree of error in their estimates), and anchoring (judgements tend to be anchored on initially presented values).4 The factors that affect risk perception have been determined through the analysis of large surveys of the general public (often undergraduate 
students). This “psychometric paradigm” underlies much of what we understand about risk perception. However, the risks that people face in these studies are largely hypothetical (such as the risk of nuclear reactor meltdown). Effects which have been described in the literature from the psychometric paradigm may well be very different if re-examined in people who face actual substantial risks such as the risks associated with a medical treatment. This paper reviews studies that have examined risk perception and decision making in patients who actually face substantial risks. The main experimental findings in risk perception from the psychometric paradigm are used as a framework for reviewing the clinical literature, and this review examines how well the findings from the clinical literature fit the predictions from the experimental data.
Weinstein has shown that people commonly view hazards as more risky for other people than for themselves.7 This means that people may be more likely to engage in a risky behaviour because they underestimate the risk associated with that behaviour. Avis et al8 examined people's perceptions of their risk of having a stroke or heart attack in the next 10 years; 57% rated their risk as lower than average while only 13% believed it was higher than average. Regression analyses revealed that participants based their estimates of heart attack risk on appropriate risk factors such as smoking, weight, and death of a parent from heart disease. Niknian et al9 also reported that people show a strong tendency to underestimate their personal risk of heart disease. Weinstein reported similar findings for estimates of the risk from food poisoning, influenza, and asthma.7
Categorical perception: dangerous or safe
There is evidence that people may treat risks on an extremely simple level, possibly coding risks as simply dangerous or safe.10 In the public health scare in the UK regarding the increased risk of venous thrombosis associated with the contraceptive pill, there seemed to be evidence that people simply reclassified the pill from safe to dangerous.11 There was little evidence that people were really considering the relatively small increase in absolute risk associated with the pill when making a decision to stop taking it. Calman pointed out that, in reality, the risks of pregnancy far outweighed the small increased risk associated with the contraceptive pill.11 The contraceptive pill scare forms an interesting contrast to cigarette smoking where the risks are much greater and yet reducing the use of tobacco has been much harder. Why the public appear to perceive the two risks so differently is unclear. It is possible that novelty, which has been shown to be a significant predictor of risk perception, could provide a partial explanation.56
The evidence for the categorical perception of risk is also supported by recent experimental studies that have examined how people use information in decision making.12–14 Reyna and Brainerd's fuzzy trace theory suggests that people simply extract the gist of any information and base their decision making upon this: risks are coded qualitatively as small or large rather than as 1%, 15%, and so on. Reyna and colleagues have presented a substantial body of evidence to support this theory. Mazur et al15 reported a survey of patients' preferences for doctor-patient communication which found that 44% of people preferred probabilities to be conveyed purely qualitatively (using terms like "possible" or "probable") rather than as percentages, which lends some support to the predictions of fuzzy trace theory.
Uncertainty and trust
Johnson and Slovic16 found that presenting risks with a degree of uncertainty, such as a range of possible values, can improve people's understanding of risk. Interestingly, participants also felt that such information could be trusted more than information containing a specific point estimate of risk. Frewer reported that information from distrusted sources is considered to be biased, whereas trusted sources are perceived to be more knowledgeable and more concerned with the public's welfare.17 Frewer indicated that governments are normally viewed as a distrusted source whereas doctors are considered a trusted source. This work suggests that future public health campaigns might be more effective if promoted by doctors rather than by government departments. Birungi reported that people in Uganda mistrust injections provided at government health institutions (because they generally mistrust the government). Nevertheless, the use of injections in the country is widespread, with people reporting that they prefer to seek medical help from people they know, even though these providers are often untrained.18
It is well understood that the way in which risks are presented or framed can affect people's perceptions of them.101920 This has significant implications for communicating risks to patients. Patients' choice of treatment modality such as surgery or radiotherapy can be strongly influenced by whether the risks are presented in terms of survival data (e.g. 90% of people will survive the immediate postoperative period and 34% will survive 5 years) or mortality data (e.g. 10% of people will die as a result of the operation and 66% will be dead within 5 years). This effect should be considered by clinicians when they are counselling patients so that they avoid biasing patients' choices.
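The two frames in the example above are arithmetic complements of the same figures; a minimal sketch (illustrative only, using the percentages quoted above) makes the equivalence explicit:

```python
# Survival and mortality frames convey identical information:
# each mortality figure is simply 100 minus the survival figure,
# yet the choice of frame can influence patients' decisions.

def mortality_frame(survival_pct):
    """Return the mortality framing of a survival percentage."""
    return 100 - survival_pct

for period, survival in [("immediate postoperative period", 90), ("5 years", 34)]:
    print(f"{period}: {survival}% survive / {mortality_frame(survival)}% die")
```

Neither frame adds information, which is precisely why any systematic difference in patients' choices between the two constitutes a bias.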
The effects of framing on the presentation of risk information have recently been reviewed by Edwards et al.21 The authors indicate that framing effects, which are consistently reported in laboratory studies, are not so reliably found in clinical studies of risk communication, and they conclude that more clinical studies are needed in this area. O'Connor22 reported significant framing effects when eliciting patients' preferences for cancer chemotherapy.
Perception and recall of risks
PERCEPTION OF QUANTITATIVE INFORMATION
Healthcare professionals use different formats for presenting risk information including absolute risk, relative risk, and number needed to treat. These different formats have been shown to affect how the risk information is interpreted and providers of information need to be aware of this.23 For example, Stone et al24 found that participants were willing to pay more for safe tyres when information was presented as a relative risk than when presented as an absolute risk. Skolbekken highlighted how pharmaceutical companies manipulated the presentation of risk information (by using relative risk reduction) to enhance the apparent effectiveness of cholesterol lowering drugs.25
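The arithmetic behind these formats can be made concrete with a short worked example. The figures below are purely illustrative and are not taken from the studies cited:

```python
# Illustrative example: the same (hypothetical) treatment effect
# expressed in three common formats. A drug that lowers 5-year event
# risk from 2% to 1% can be described as a 1 percentage point absolute
# reduction, a 50% relative reduction, or an NNT of 100.

def risk_formats(baseline_risk, treated_risk):
    """Return absolute risk reduction, relative risk reduction, and NNT."""
    arr = baseline_risk - treated_risk   # absolute risk reduction
    rrr = arr / baseline_risk            # relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return arr, rrr, nnt

arr, rrr, nnt = risk_formats(0.02, 0.01)
print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

The "50%" relative figure sounds far more impressive than the "1 percentage point" absolute figure, although both describe the same effect, which is why the choice of format can be used to manipulate perceived effectiveness.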
Grimes and Snively26 examined whether risks were better understood when stated as rates (e.g. 8.9 per 1000 or 2.6 per 1000) or as proportions (e.g. 1 in 112 or 1 in 384). Six hundred and thirty three women in a gynaecology outpatient clinic were asked to judge which risk was greater in each format—for example, which is the greater risk: 1 in 112 or 1 in 384? Risks expressed as rates were generally better understood than proportions: 73% of participants correctly identified that 8.9 per 1000 was a higher risk than 2.6 per 1000, while only 56% correctly identified which of the proportions was the higher risk. Overall, however, participants showed poor understanding of both formats: 36% of patients were unable to indicate which was the higher risk regardless of how the information was presented.
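The two formats in the study in fact describe the same pair of underlying risks; a simple conversion (a sketch, not part of the study itself) makes this explicit:

```python
# Converting the study's "1 in n" proportions to "x per 1000" rates
# shows that both question formats expressed the same underlying risks.

def proportion_to_rate_per_1000(n):
    """Convert a '1 in n' proportion to an 'x per 1000' rate."""
    return 1000 / n

for n in (112, 384):
    rate = proportion_to_rate_per_1000(n)
    print(f"1 in {n} = {rate:.1f} per 1000")  # 8.9 and 2.6 respectively
```

That participants performed differently on two arithmetically equivalent questions underlines how strongly presentation format shapes comprehension.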
Woloshin et al27 compared different methods for eliciting women's perceptions of the risk of breast cancer. Actual breast cancer risks were estimated and compared with women's own perceptions of risks. The authors found that asking women to estimate their risk in “x in 1000” format greatly inflated their perception of the risk. In contrast, when women simply rated whether their risk was higher, lower, or about average, then their perceptions were found to be more closely matched to the estimates of actual risk.
RECALL OF RISK INFORMATION
Evidence suggests that clinical staff can be very poor at communicating risks to patients, and that patients can be very poor at recalling what they were told. Ellis et al28 reported that 38% of patients who were verbally counselled by their clinician could not recall their diagnosis when questioned later.
More recently we have examined patients' ability to recall risk information they were given regarding carotid endarterectomy.29 Carotid endarterectomy (CEA) has been shown to significantly reduce patients' long term risk of stroke, but the operation itself carries a significant stroke risk. It is an interesting treatment to examine because the risks of this treatment and the risks of not undergoing surgery are well understood.3031 Seventy three patients on the waiting list for surgery were surveyed after seeing their vascular surgeon in order to determine their understanding of the risks of stroke as a result of surgery and their risks if they had decided not to go ahead with the operation (56 (77%) responded). The surgeons carefully explained the procedure and gave information to patients regarding the risk of CEA based on the unit's own surgical audit and the results of multicentre trials.3031 Patients' recall of the information they had been given was very poor, and only one could recall all of the risks that he had been told. Estimates of their stroke risk without surgery were hugely variable (range 22–100%, mean 57%, actual risk 22%) and were significantly overestimated. Patients' estimates of stroke risk due to endarterectomy were also inaccurate (range 0–65%, mean 10%, actual risk quoted 2%). Patients were re-surveyed on the day before their operation and their estimates of stroke risk due to endarterectomy were found to have increased threefold.
Fisk has considered why patients' estimates of risk may differ from those of the doctor who counselled them.32 Experts may well underestimate their own risk of error. Individuals may feel that they differ from the average patient (maybe because of perceived severity of symptoms or medical history). Experts may also be seen to play down the importance of areas of uncertainty—for example, while a surgeon may quote an operative stroke risk of 2%, it may be unknown why that 2% suffer a stroke.
UNDERSTANDING OF RISKS
In the present context, risk is considered to be the product of the probability of an outcome and the severity of that outcome. Clearly, an understanding of both aspects of risk is crucial when patients are asked to make decisions about their treatment. Interestingly, much of the attention in the literature on risk perception has been concerned with how well people understand and can recall the numbers or probabilities associated with risks; much less work has addressed people's understanding of the qualitative nature or severity of outcomes. Surgeons, for example, may be satisfied if their patients understand that they face a 2% risk of a heart attack. It is equally important, however, to determine whether patients understand what a heart attack is and how it will affect their health, functional status, and quality of life both now and in the future. One recent study has examined patients' understanding of heart failure using a qualitative approach.33 The authors concluded that there is "little public understanding of chronic heart failure". More work is needed to examine the factors that affect people's understanding of the nature of risks as opposed to their probability.
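Under this probability-times-severity definition, a rare but severe outcome can carry the same overall risk as a common but mild one. The severity weights below are hypothetical, chosen only to illustrate the definition:

```python
# Risk as probability x severity, with severity on a hypothetical 0-1
# scale (0 = no harm, 1 = the worst possible outcome). The outcome
# labels and weights are invented for illustration.

def risk(probability, severity):
    """Risk as the product of outcome probability and outcome severity."""
    return probability * severity

severe = risk(0.02, 0.9)   # e.g. a 2% chance of a disabling stroke
mild = risk(0.18, 0.1)     # e.g. an 18% chance of a minor wound infection
print(round(severe, 3), round(mild, 3))  # both 0.018
```

A patient who grasps the 2% and 18% probabilities but not the difference in severity cannot weigh these two hazards meaningfully, which is why understanding the nature of outcomes matters as much as understanding their probabilities.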
Gattellari et al34 examined how well cancer patients understood the information they had been given. They found that 80% of patients who had been told by their doctor that there was no chance of cure reported that there was actually some chance of cure, and 15% reported that their chance of cure was at least 75%; 40% of patients did not understand whether the goal of treatment was curative, adjuvant, or palliative, and 44% overestimated the probability of treatment prolonging life. Regression analyses revealed that misunderstanding was predicted by denial rather than by factors associated with the actual communication process. One feature of denial is cognitive avoidance which describes how patients actively avoid information about their disease.34 This is considered adaptive because it may help to reduce the emotional impact of the disease. However, in the current context it suggests that, regardless of how well a risk communication strategy is developed, there may be patient specific factors that limit its effectiveness.
These studies provide evidence that doctors and patients exhibit some of the biases in risk perception and decision making previously reported from laboratory studies. The data indicate that many patients have poor comprehension and recall of risk information. Indeed, fuzzy trace theory predicts that it may be unrealistic even to expect patients to recall accurate risk information: while clinicians typically report risk information as percentages or relative risks, the evidence suggests that people may code the information qualitatively. Decision making has been shown to be subject to bias as a result of heuristics, and more recent evidence indicates that much of the information people are presented with may not even be used in the decision making process.35
The movement towards shared decision making in health care places an important emphasis on the role of the patient in decision making.36 It is important for shared decision making programmes to take account of these findings from psychology. Fischhoff has highlighted how simply providing accurate information in an understandable format does not necessarily improve the communication of risk.37 The design of such tools needs to be guided by an understanding of how people understand risk and benefit information, how information is weighted or ordered (or even whether it is), and how decision making processes work.
This review has been restricted to presenting information from clinical studies in order to examine whether effects reported from the laboratory are also found in the field. There is some evidence that this is not always the case.20 It certainly should not be assumed that factors that affect decision making in experiments will exert the same effect in clinical scenarios. It is also far from clear what the best method for assessing risk perception is: if we do not know how people understand and code risks, then it is very difficult to measure risk perception. Surveys or questionnaires are commonly used, but these constrain the way in which people can respond and so make assumptions about how people understand risks. For example, our study in Leicester asked people to give their percentage risk of suffering a stroke as a result of surgery; if risks are coded qualitatively, this may not have been the most appropriate format. It is also difficult (especially in field studies) to measure risk perception at the point when decisions are made or information is given. If it is measured after the event, then we are actually measuring people's perception and recall of risk.
Key messages

- Experimental evidence suggests that people use simplifying heuristics in risk perception and decision making.
- Despite the large amount of evidence from experimental studies, there is relatively little clinical evidence concerning patients' perception of risk.
- Evidence indicates that patients often have a very poor understanding of quantitative risk information, and very little research has examined how well people understand the qualitative nature of risks.
- Evidence also suggests that experimental findings from the laboratory do not naturally translate to clinical scenarios.
- A better understanding of how people code risk information and use that information in decision making is needed to improve the communication of risk to patients.
- More experimental work in clinical settings is needed.
HERU is supported by the Chief Scientist Office of the Scottish Executive Health Department. The views expressed in this paper are those of the author and not necessarily those of the Scottish Executive Health Department.