Several methods to investigate relative attribute impact in stated preference experiments
Introduction
A common objective of discrete choice experiments (DCEs) is to compare the relative impact of attributes of the product or program under investigation. For example, is test accuracy relatively more important to patients than time spent waiting for results when choosing diagnostic tests? Most studies assess relative attribute impact by comparing the size and significance of the estimated parameters for the attributes of interest. Unfortunately, these parameters are not directly comparable because attribute parameter estimates in discrete choice models (DCMs) are confounded with the underlying subjective utility scale. That is, parameter estimates combine the relative impact or importance of an attribute with the utility scale values associated with its levels. Thus, utility estimates for attribute levels cannot be interpreted as indicating the relative importance of an attribute.
In particular, the estimated utility of each attribute level is measured on an interval scale, but the origins and units of each attribute's utility scale differ. Apart from obvious differences in underlying physical attribute units like price in dollars, time in minutes/hours etc., qualitative attributes have no physical referents. For example, attribute levels for “provider of care” might be nurse, doctor, etc. Thus, distances between the levels of different attributes need not have the same meaning. So, utility scale locations, or utility differences between levels of different attributes, generally do not have equal scale units. One can equate the origins of each scale, but not the scale units; hence, direct comparisons of ranges of utility estimates are meaningless without transforming them in a theoretically acceptable way, or modifying a choice experiment. Put simply, one cannot determine whether the magnitudes of the parameter estimates for an attribute's levels, and hence the resulting range of parameter estimates for these levels, are due to the “impact” of that attribute or the position of each attribute level on the underlying utility scale. To assess relative attribute impacts one needs to measure each on a common, comparable scale.
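The confound can be seen directly in the multinomial logit model, where only the product of the scale parameter and the preference weights is identified. The sketch below (not from the paper; all attribute names and coefficient values are hypothetical) shows that two parameter vectors differing only by a common scale factor generate identical choice probabilities, so the raw magnitude of an estimated coefficient cannot, on its own, reveal attribute impact:

```python
import math

def mnl_probs(betas, alternatives, scale=1.0):
    """Multinomial logit choice probabilities.

    Each alternative is a tuple of attribute levels; utility is the
    scaled inner product of preference weights and attribute levels.
    """
    utilities = [scale * sum(b * x for b, x in zip(betas, alt))
                 for alt in alternatives]
    denom = sum(math.exp(u) for u in utilities)
    return [math.exp(u) / denom for u in utilities]

# Two hypothetical diagnostic tests described by (accuracy, waiting time).
alts = [(0.9, 30.0), (0.7, 10.0)]

# (scale=2, betas) and (scale=1, 2*betas) are observationally equivalent:
# estimation recovers only scale*beta, never the two factors separately.
p1 = mnl_probs([1.5, -0.05], alts, scale=2.0)
p2 = mnl_probs([3.0, -0.10], alts, scale=1.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p2))
```

Because any positive rescaling of the coefficients is indistinguishable in the data, comparisons of attribute impact must rely on quantities that are invariant to that rescaling.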
The purpose of this paper is to focus attention on the confound between attribute impact and attribute-level scale utilities in DCEs, and to outline and discuss five ways to compare relative attribute impacts: (1) partial log-likelihood analysis; (2) marginal rates of substitution (MRS); (3) Hicksian welfare measures; (4) probability analysis; and (5) best–worst attribute scaling (BWAS). The first four methods address relative attribute impact within a traditional DCE; we demonstrate them in an empirical application which, to our knowledge, is the first health-related DCE to include two-way attribute interactions in a non-linear indirect utility function (IUF). The fifth method, BWAS, is itself a modified DCE.
The rest of the paper is organised as follows. The next section discusses the theoretical background for the confound between attribute impact and level scale. The third section outlines a menu of five methods to investigate the relative impact of attributes that are illustrated in two empirical applications in the fourth section. The fifth section discusses advantages and disadvantages of each method and circumstances in which each may be appropriate. The final section concludes.
Section snippets
Confound between attribute impact and scale
Attribute parameters estimated in choice experiments combine the impact of an attribute and the underlying latent utility scale on which its levels are measured. This “confound” of impact and scale has long been recognised in utility theory and psychology (Anderson, 1970; Keeney & Raiffa, 1976; Louviere, 1988b; Lynch, 1985), but is less widely recognised by those who apply conjoint elicitation procedures (see McIntosh & Louviere, 2002 for an exception). The following issues relate to the
Methods to investigate relative impact of attributes
We outline five methods that place attributes on common and commensurable scales.
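One of the five methods, the marginal rate of substitution, illustrates how such a common scale can be obtained: in the simplest linear-in-attributes case, the ratio of two coefficients cancels the unidentified scale factor. The sketch below is a minimal illustration under that linearity assumption (the paper's own application uses a non-linear IUF); the attribute names and estimates are hypothetical:

```python
def mrs(beta_attr, beta_numeraire):
    """Marginal rate of substitution: units of the numeraire attribute a
    respondent would give up for one unit of the attribute of interest,
    holding utility constant. The unidentified scale factor multiplies
    both coefficients, so it cancels in the ratio.
    """
    return -beta_attr / beta_numeraire

# Hypothetical linear IUF estimates: V = 1.5*accuracy - 0.05*wait_minutes
beta_accuracy, beta_wait = 1.5, -0.05

# Respondents would accept 30 extra minutes of waiting for one unit of
# accuracy; rescaling both coefficients by any k > 0 leaves this unchanged.
print(mrs(beta_accuracy, beta_wait))  # 30.0
```

With cost as the numeraire, the same ratio yields willingness to pay, which is why MRS-based measures are expressed on a scale that is comparable across attributes.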
Empirical applications
This section presents two empirical studies. The first demonstrates the first four methods outlined above in the context of a choice experiment and the second illustrates BWAS.
Discussion
We outlined and illustrated five ways to measure relative attribute impacts in stated preference studies. Some of these methods, or variations of them, have been used in the health economics literature, although not for the purpose of this paper. Some, such as the Hicksian CV and BWAS, only recently were introduced to health economics (see Lancsar & Savage, 2004 for the former and Flynn et al., 2007; McIntosh & Louviere, 2002 for the latter). Others, such as probability analysis and MRS
Conclusion
We discussed the fact that despite common practice, relative attribute impacts in DCEs cannot be inferred directly from parameter estimates due to confounds between the attribute impacts and utility scales on which attribute levels are positioned. We presented a menu of five methods that can be used to compare relative attribute impacts: partial log-likelihood analysis; MRS in the context of non-linear models; Hicksian welfare measures; probability analysis; and BWAS. The first four methods
Acknowledgments
The authors benefited from discussions with Tony Marley on expanding Best-Worst choices and from a discussion of an earlier version of this paper by Verity Watson at the July 2005 HESG meeting. We also gratefully acknowledge the support of the Australian Research Council, Grant number DP0343632, entitled “Modelling the Choices of Individuals.” Emily Lancsar is funded by the Health Foundation and an Overseas Research Scholarship. Terry Flynn is funded by the MRC Health Services Research
References (38)
- et al. (2005). Optimal designs for choice experiments with asymmetric attributes. Journal of Statistical Planning and Inference.
- et al. (2007). Best–worst scaling: What it can do for health care and how to do it. Journal of Health Economics.
- et al. (2005). Some probabilistic models of best, worst and best–worst choices. Journal of Mathematical Psychology.
- (1999). Using conjoint analysis to take account of patient preferences and go beyond health outcomes: An application to in vitro fertilisation. Social Science & Medicine.
- (2001). Eliciting GPs’ preferences for pecuniary and non-pecuniary job characteristics. Journal of Health Economics.
- et al. (2005). Quick and easy choice sets: Constructing optimal and nearly optimal stated choice experiments. International Journal of Research in Marketing.
- (1970). Functional measurement and psychophysical judgement. Psychological Review.
- (1982). Methods of information integration theory.
- et al. (2006). Preferences for aspects of a dermatology consultation. British Journal of Dermatology.
- Cohen, S. (2003). Maximum difference scaling: Improved measures of importance and preference for segmentation. In...
- The determinants of convention site selection: A logistic choice model from experimental data. Journal of Travel Research.
- Determining the appropriate response to evidence of public concern: The case of food safety. Journal of Public Policy and Marketing.
- Belief, attitude, intention and behavior: An introduction to theory and research.
- Analysing public preferences for cancer screening programmes. Health Economics.
- Using stated preference discrete choice modelling to evaluate the introduction of varicella vaccination. Health Economics.
- Decisions with multiple objectives: Preferences and value tradeoffs.