Prior research has shown that measured attribute importance varies with the measurement method. In particular, many studies have reported low correlations between subjective attribute importance, in which consumers directly rate the importance of attributes, and estimated attribute importance, in which attribute weights are derived statistically from consumers' preferences for multiattribute products. The present research proposes that an important source of this discrepancy lies in the characteristics of the preference evaluation task used to estimate attribute importance. Rating and ranking, the two tasks most widely used to measure preferences for multiattribute products, differ in their response scale and response mode, and these task characteristics cause estimated attribute importance to diverge from subjective attribute importance. Our results showed that for the rating task, which relies on a separate evaluation mode, the divergence from subjective attribute importance was large for comparable attributes, which are difficult to evaluate in isolation. For attribute importance estimated from the ranking task, in contrast, convergence with subjective attribute importance was lowered by the prominence effect, whereby the most important attribute receives more weight than it actually warrants. These results indicate that both rating and ranking tasks induce low convergence between estimated and subjective attribute importance, albeit through different types of bias. This research further suggests that a "joint rating" task, which uses the same response scale as the rating task but a joint evaluation mode similar to that of the ranking task, reflects subjective attribute importance better than either of the two tasks. The joint rating task exhibited neither a pronounced prominence effect nor an attribute evaluability effect, and consequently showed better convergence with subjective attribute importance than the other two tasks.
A number of methods have been proposed to assess consumers' attribute importance. From a theoretical perspective, attribute importance is an integral component in explaining consumer attitude formation and the decision-making process. For example, multiattribute judgment models posit that the evaluation of a product option is a function of attribute values and attribute importance combined by some linear rule. The accurate assessment of attribute importance also has tremendous practical importance: managers often rely on consumers' expressed attribute importance when making strategic decisions on key issues such as market segmentation and product configuration development. Attribute importance measures are largely classified into subjective and indirectly estimated methods. Subjective methods use self-report approaches to assess consumers' attribute importance. Indirect methods, on the other hand, derive attribute importance statistically from consumers' expressed preferences for product options, which are often elicited by different types of evaluation tasks (e.g., rating or ranking). Despite the theoretical connection between the direct and indirect measures, prior studies have shown only a weak to moderate level of convergence between the two approaches. This substantial divergence among measurement approaches is problematic given the ubiquitous interchangeable use of direct and indirect methods in marketing research and practice. The current research suggests that the type of response task used to elicit preferences when deriving indirect attribute importance (e.g., ranking or rating) is an important factor affecting the degree of correspondence between subjective and indirectly estimated attribute importance. We propose that response tasks differ in their underlying characteristics, and that each characteristic is associated with a different type of weighting bias.
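The statistical derivation behind indirect measures can be sketched as follows. This is a hypothetical illustration of one common approach (regressing a respondent's profile ratings on dummy-coded attribute levels and converting part-worth ranges into relative weights), not the authors' exact procedure; all profiles and ratings below are invented.

```python
# Minimal sketch of deriving indirect attribute importance from ratings.
# Hypothetical data: 8 profiles of a product with two binary attributes.
import numpy as np

# Design matrix columns: intercept, attribute A (high level), attribute B (high level).
X = np.array([
    [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1],
    [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1],
], dtype=float)

# One respondent's overall ratings of the 8 profiles (1-9 scale, invented).
y = np.array([2, 5, 4, 8, 3, 6, 4, 9], dtype=float)

# Ordinary least squares gives the part-worth of each attribute level.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Relative importance = each attribute's part-worth range divided by the
# sum of ranges (with one dummy per attribute, the range is |coefficient|).
ranges = np.abs(beta[1:])
importance = ranges / ranges.sum()
print(importance)  # roughly [0.375, 0.625] for this hypothetical respondent
```

For ranking data, the same regression logic is often applied after replacing the ratings with (reversed) rank scores, which is one reason the two tasks can yield different estimated weights.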
Specifically, response tasks differ in their response scale and response mode. Response scale refers to whether the processing evoked by a given task is qualitative or quantitative in nature. Response mode refers to whether preference is constructed in a separate or a joint evaluation mode. We examine this issue by comparing the two most frequently used preference elicitation methods in indirect attribute importance measurement: the rating task and the ranking task. With respect to the response scale, the rating task involves evaluating alternatives by assigning numbers along a given scale; thus, the rating task can be characterized as quantitative. The ranking task, on the other hand, is based on a qualitative scale, since it requires the sequential arrangement of the presented alternatives according to a given criterion. Because a qualitative task evokes more qualitative or ordinal reasoning, such as a lexicographic decision rule, the ranking task leads to disproportionately greater weighting of the most important attribute (i.e., the prominence effect). Conversely, the conventional rating task induces quantitative weighting of attributes and hence is relatively free of the prominence bias. Therefore, we hypothesize that the prominence effect (i.e., a large divergence from direct weights on the most important attribute) is observed only for ranking-based, and not for rating-based, indirect attribute importance (H1). In terms of response mode, the conventional rating task relies on the separate evaluation mode, since it calls for minimal comparison of the target option with other options. Ranking, on the other hand, relies on the joint evaluation mode, since the task requires comparisons among the available alternatives.
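The lexicographic reasoning that a ranking task is argued to evoke can be illustrated with a toy sketch: alternatives are ordered by the most important attribute first, with ties broken by the next attribute, so the top attribute dominates the resulting ranks regardless of its numeric weight. The profiles and the attribute priority below are invented for illustration.

```python
# Toy lexicographic ranking: higher attribute values are better.
# Hypothetical profiles with two attributes.
profiles = [
    {"quality": 3, "price": 1},
    {"quality": 2, "price": 3},
    {"quality": 3, "price": 2},
    {"quality": 1, "price": 3},
]

# Attributes listed in subjective order of importance (quality first).
priority = ["quality", "price"]

# Sort by the most important attribute, breaking ties with the next one;
# negation makes higher values come first in ascending sort order.
ranked = sorted(profiles, key=lambda p: tuple(-p[a] for a in priority))
```

Note that the profile strongest on the secondary attribute can never outrank one that is stronger on the primary attribute, which is the mechanism behind the disproportionate weighting of the most important attribute.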
Owing to this joint versus separate evaluation distinction, the two tasks differ in the types of attributes for which they can provide relatively accurate measurements. Specifically, indirect attribute importance derived from a rating task is affected by the evaluability of the attributes: the rating task can provide a relatively accurate assessment only for attributes that are easy to evaluate alone (i.e., enriched attributes), and not for attributes that are difficult to evaluate independently (i.e., comparable attributes). Thus, the convergence of rating-based indirect attribute importance with direct attribute importance will be higher for enriched than for comparable attributes, whereas the convergence of ranking-based indirect attribute importance with directly assessed attribute importance does not depend on the evaluability of the attributes (H2). These two hypotheses were tested in Study 1 and Study 2. In both studies, indirect importance was derived from respondents' preferences among sixteen hypothetical product profiles, assessed with either a rating or a ranking task. Subjective attribute importance was assessed with a self-report direct rating method, which is known to be the most reliable subjective measure. The results of Study 1 showed that indirect attribute importance derived from both the rating and the ranking task diverged substantially from subjective attribute importance; however, the source of the divergence differed by task type. Ranking-based indirect attribute importance diverged from subjective attribute importance primarily because of its greater weighting of the most important attribute (i.e., the prominence effect). This prominence effect did not occur for rating-based indirect attribute importance; instead, the overall divergence between rating-based indirect attribute importance and subjective attribute importance resulted mainly from divergent weighting of the comparable attributes.
That is, when indirect attribute importance was assessed with the rating task, the level of correspondence with subjective attribute importance was low for the comparable attributes but not for the enriched attributes. As expected, the ranking task, which relies on the joint evaluation mode, did not suffer from the attribute evaluability effect. Study 2 attempted to replicate the findings of Study 1 and, more importantly, proposed a joint rating task as a means of improving convergence with subjective attribute importance. The joint rating task is similar to the rating task in the physical scale it uses, but it relies on the joint evaluation mode. Thus, we expected the biases associated with the rating and ranking tasks to be reduced in the joint rating task, enhancing overall convergence with subjective attribute importance. In line with this expectation, the results showed improved overall convergence with subjective attribute importance. Moreover, joint rating-based indirect attribute importance was less vulnerable to both the prominence effect and the attribute evaluability effect, showing improved convergence for the most important attribute and for the comparable attributes. In sum, our findings carry important implications for marketers attempting to assess consumers' attribute importance with indirect methods. The results demonstrate the major weaknesses of the rating and ranking tasks, the two most frequently used attribute importance measures: differing in response scale and response mode, the two evaluation tasks carry different advantages and disadvantages in reflecting an individual's underlying preferences. We therefore suggest the joint rating task as an alternative way to derive consumers' attribute importance, since it combines the advantages of both the rating and the ranking measures.
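Convergence between directly stated and indirectly estimated weights, as discussed throughout, is typically quantified as a correlation between the two weight vectors. The sketch below uses invented numbers in which the estimated vector overweights the most important attribute, mimicking a prominence effect; it is an illustration of the convergence metric, not the study's data.

```python
# Hypothetical weight vectors over four attributes (each sums to 1).
import math

subjective = [0.40, 0.25, 0.20, 0.15]   # self-reported direct weights (invented)
estimated  = [0.55, 0.20, 0.15, 0.10]   # indirect weights overweighting attribute 1

def pearson(a, b):
    """Pearson correlation between two equal-length weight vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

r = pearson(subjective, estimated)

# Per-attribute divergence localizes the bias: the largest gap falls on
# the most important attribute, the signature of a prominence effect.
gaps = [e - s for e, s in zip(estimated, subjective)]
```

A high overall correlation can coexist with a large gap on a single attribute, which is why the studies examine attribute-level divergence in addition to overall convergence.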