Introduction and Background
"Globalization of business highlights the need to understand the management of organizations that span different nations and cultures" (Srite et al., 2003, p. 31). In these multinational and transcultural organizations, there is a growing call for utilizing information technology (IT) to achieve efficiencies, coordination, and communication. However, cultural differences between countries may have an impact on the effectiveness and efficiency of IT deployment. Despite its importance, the effect of cultural factors has received limited attention from information systems' (IS) researchers. In a review of cross-cultural research specifically focused on the MIS area (Evaristo, Karahanna, & Srite, 2000), a very limited number of studies were found that could be classified as cross-cultural. Additionally, even though many of the studies found provided useful insights, raised interesting questions, and generally contributed toward the advancement of the state of the art in its field, with few exceptions, no study specifically addressed equivalency issues central to measurement in cross-cultural research. It is this methodological issue of equivalency that is the focus of this article.
Methodological Issues
Methodological considerations are of the utmost importance to cross-cultural studies, because valid comparisons require cross-culturally equivalent research instruments, data collection procedures, research sites, and respondents. Ensuring equivalency is an essential element of cross-cultural studies and is necessary to avoid confounds and contaminating effects of various extraneous elements.
Cross-cultural research has some unique methodological idiosyncrasies that are not pertinent to intracultural research. One characteristic that typifies cross-cultural studies is their comparative nature, i.e., they involve a comparison across two or more separate cultures on a focal phenomenon. Any observed differences across cultures give rise to many alternative explanations. Particularly when results are different than expected (e.g., no statistical significance, factor analysis items do not load as expected, or reliability assessment is low), researchers may question whether results are true differences due to culture or merely measurement artifacts (Mullen, 1995).
Methodological considerations in carrying out cross-cultural research attempt to rule out alternative explanations for these differences and enhance the interpretability of results (van de Vijver & Leung, 1997). Clearly, the choice and appropriateness of the methodology can make a difference in any research endeavor. In cross-cultural research, however, it is arguably one of the most critical decisions. In this section, we briefly review such cross-cultural methodological considerations. Specifically, this section addresses equivalence (Hui & Triandis, 1985; Poortinga, 1989; Mullen, 1995) and bias (Poortinga & van de Vijver, 1987; van de Vijver & Leung, 1997; van de Vijver & Poortinga, 1997) as key methodological concerns inherent in cross-cultural research. Then, sampling, wording, and translation are discussed as important means of overcoming some of the identified biases.
Equivalence
Achieving cross-cultural equivalence is an essential prerequisite in ensuring valid cross-cultural comparisons. Equivalence cannot be assumed a priori. Each cross-cultural study needs to establish cross-cultural equivalence. As such, equivalence has been extensively discussed in cross-cultural research, albeit using different terms to describe the phenomenon (Mullen, 1995; Poortinga, 1989).
To alleviate confusion created by the multiplicity of concepts and terms used to describe different but somewhat overlapping aspects of equivalence, Hui and Triandis (1985) integrated prior research into a summary framework that consists of four levels of equivalence: conceptual/functional equivalence, equivalence in construct operationalization, item equivalence, and scalar equivalence. Even though each level of equivalence is a prerequisite for the subsequent levels, in practice, the distinction between adjacent levels of equivalence often becomes blurry. Nonetheless, the objective in cross-cultural research is to achieve all four types of equivalence. Hui and Triandis' (1985) four levels of equivalence are discussed as follows:
1. Conceptual/functional equivalence is the first requirement for cross-cultural comparisons and refers to whether a given construct has similar meaning across cultures. Furthermore, to be functionally equivalent, the construct should be embedded in the same nomological network of antecedents, consequents, and correlates across cultures. For instance, workers from different cultures may rate "supervisor is considerate" as a very important characteristic; however, the meaning of "considerate" may vary considerably across cultures (Hoecklin, 1994). (A brief sketch of a nomological network check appears after this list.)
2. Equivalence in construct operationalization refers to whether a construct is manifested and operationalized the same way across cultures. Not only should the construct be operationalized using the same procedure across cultures, but the operationalization should also be equally meaningful.
3. Item equivalence refers to whether identical instruments are used to measure the constructs across cultures. This is necessary if the cultures are to be numerically compared.
4. Scalar equivalence (or full score comparability; see van de Vijver and Leung, 1997) occurs if the instrument has achieved all prior levels of equivalence, and the construct is measured on the same metric. This implies that "a numerical value on the scale refers to same degree, intensity, or magnitude of the construct regardless of the population of which the respondent is a member" (Hui & Triandis, 1985, p. 135).
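As an illustration of the nomological network idea mentioned under conceptual/functional equivalence, the sketch below is not part of the original article; the pandas-based approach, the grouping variable, and all column names are illustrative assumptions. It computes the correlations of a focal construct with its hypothesized antecedents and consequents separately for each culture so that the resulting patterns can be compared.

```python
# A minimal sketch, assuming survey data in a pandas DataFrame with one row per
# respondent, a 'culture' grouping column, and hypothetical construct scores.
import pandas as pd

def nomological_correlations(df: pd.DataFrame, focal: str, related: list[str],
                             group_col: str = "culture") -> pd.DataFrame:
    """Correlation of the focal construct with its hypothesized correlates, per culture."""
    return df.groupby(group_col).apply(lambda g: g[related].corrwith(g[focal]))

# Hypothetical usage: each row is one culture; similar rows suggest a similar
# nomological network across cultures.
# nomological_correlations(survey, focal="perceived_usefulness",
#                          related=["ease_of_use", "intention_to_use"])
```

Markedly different correlation patterns across the rows would cast doubt on conceptual/functional equivalence before any mean comparisons are attempted.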
Bias: Sources, Detection, and Prevention
To achieve equivalence, one has to first identify and understand factors that may introduce biases in cross-cultural comparisons. Van de Vijver and Poortinga (1997) described three different types of biases: construct bias, method bias, and item bias:
1. Construct bias occurs when a construct measured is not equivalent across cultures both at a conceptual level and at an operational level. This can result from different definitions of the construct across cultures, lack of overlap in the behaviors associated with a construct [e.g., behaviors associated with being a good son or daughter (filial piety) vary across cultures], poor sampling of relevant behaviors to be represented by items on instruments, and incomplete coverage of the construct (van de Vijver & Leung, 1997). Construct bias can lead to lack of conceptual/functional equivalence and lack of equivalence in construct operationalization.
2. Method bias refers to bias in the scores on an instrument that can arise from characteristics of an instrument or its administration (van de Vijver & Leung, 1997), which results in subjects across cultures not responding to measurement scales in the same manner (Mullen, 1995). Method bias gives rise to concerns about the internal validity of the study. One source of method bias is sample inequivalency in terms of demographics, educational experience, organizational position, etc. Other method bias concerns relate to differential social desirability of responses (Ross & Mirowsky, 1984) and inconsistent scoring across populations (termed "selection-instrumentation effects" by Cook and Campbell, 1979, p. 53). For instance, on Likert scales, Koreans tend to avoid extremes and prefer to respond using the midpoints on the scales (Lee & Green, 1991), while Hispanics tend to choose extremes (Hui & Triandis, 1985); a sketch of how such response-style differences might be screened for appears after this list. Differential scoring methods may also arise if respondents from a particular culture or country are not familiar with the type of instrument being used.
3. Item bias refers to measurement artifacts. These can arise from poor item translation, complex wording of items, or items inappropriate for a cultural context. Consequently, item bias is best prevented through careful attention to these issues. Like method bias, item bias can influence conceptual/functional equivalence, equivalence of operationalization, and item equivalence.
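Because response styles such as extreme or midpoint responding are a common symptom of method bias, they can be screened for before substantive comparisons are made. The sketch below is not part of the original article; the 7-point scale, the two-group design, and all column names are illustrative assumptions.

```python
# A minimal sketch for contrasting extreme-response tendencies in two cultural groups.
import numpy as np
import pandas as pd
from scipy import stats

def response_style_indices(responses: pd.DataFrame, low: int = 1, high: int = 7) -> pd.DataFrame:
    """Per-respondent share of extreme and midpoint answers on Likert items."""
    midpoint = (low + high) / 2
    extreme = ((responses == low) | (responses == high)).mean(axis=1)
    middle = (responses == midpoint).mean(axis=1)
    return pd.DataFrame({"extreme_share": extreme, "midpoint_share": middle})

def compare_groups(df: pd.DataFrame, item_cols: list[str], group_col: str = "culture"):
    """Welch's t-test on extreme-response shares, assuming exactly two cultural groups."""
    idx = response_style_indices(df[item_cols]).join(df[group_col])
    groups = [g["extreme_share"].to_numpy() for _, g in idx.groupby(group_col)]
    t, p = stats.ttest_ind(groups[0], groups[1], equal_var=False)
    return t, p
```

A pronounced difference would suggest treating the response style itself (for example, through within-respondent standardization, as discussed in the cross-cultural literature) before interpreting mean differences across cultures.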
Table 1 presents a summary of how the three types of bias can be prevented or detected. The next section discusses three important methods of bias prevention: sampling, wording, and translation. This article concludes by presenting a set of cross-cultural methodological guidelines derived by a committee of international scholars.
Sampling
Sampling decisions in cross-cultural studies involve two distinct levels: sampling of cultures and sampling of subjects (van de Vijver & Leung, 1997). Sampling of cultures involves decisions associated with selecting the cultures to be compared in the study. Many studies involve a convenience sample of cultures, typically ones where the researcher has preestablished contacts. Even though this strategy reduces the considerable costs of conducting cross-cultural research, it may hinder interpretability of results, particularly when no differences are observed across cultures (van de Vijver & Leung, 1997). Systematic sampling of cultures, on the other hand, identifies cultures based on theoretical considerations.
Table 1. Types of bias, prevention, and detection
| Type of bias (focus) | Detection | Prevention |
|---|---|---|
| Construct bias (constructs) | • Informants describe construct and associated behaviors • Factor analysis • Multidimensional scaling • Simultaneous confirmatory factor analysis in several populations • Comparison of correlation matrices • Nomological network | • Informants describe construct and associated behaviors in each culture |
| Method bias (administration procedures) | • Repeated administration of instrument • Method triangulation • Monomethod-multitrait matrix | • Sampling (matching, statistical controls) • Identical physical conditions of administering the instrument • Unambiguous communication between interviewer and interviewee • Ensured familiarity with the stimuli used in the study |
| Item bias (operationalization) | • Analysis of variance • Item response theory • Delta plots • Standardized p-difference • Mantel-Haenszel procedure • Alternating least squares optimal scaling • Multiple-group LISREL | • Wording • Translation |
Typically, this involves selecting cultures that are at different points along a theoretical continuum, such as a cultural dimension. Random sampling of cultures involves selection of a large number of cultures randomly and allows for wider generalizability of results.
Most cross-cultural studies discussing sampling considerations, however, refer to sampling of subjects. Ensuring sample equivalency is an important methodological consideration in cross-cultural research, and it refers to the inclusion of subjects that are similar on demographic, educational, and socioeconomic characteristics. Sample equivalency can be achieved by either matching subjects across groups based on these background variables or statistically controlling for the differences by including such demographic variables as covariates in the cross-cultural comparisons (van de Vijver & Leung, 1997).
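As a concrete illustration of the second option, statistical control, the sketch below regresses a hypothetical outcome on culture while including demographic covariates. It is not from the original article; all variable names are assumptions.

```python
# A minimal sketch, assuming a pandas DataFrame with one row per respondent and
# hypothetical columns 'perceived_usefulness', 'culture', 'age', and 'education_years'.
import pandas as pd
import statsmodels.formula.api as smf

def compare_cultures_with_covariates(df: pd.DataFrame):
    """Estimate the culture effect on the outcome, holding demographics constant."""
    model = smf.ols("perceived_usefulness ~ C(culture) + age + education_years",
                    data=df).fit()
    return model  # model.summary() shows the culture coefficients adjusted for covariates
```

The coefficients on C(culture) then reflect cross-cultural differences adjusted for the measured background variables, rather than differences attributable to sample composition.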
Wording and Translation
Wording and translation constitute one of the key problems in cross-cultural methods, because in most cases, different cultures also have different languages. Even when subjects from different countries are conversant with English, they may miss the nuances of the intended meanings in questionnaire items (e.g., British, Canadian, and American English all have unique terms). Researchers should ensure that measurement instruments keep the same meanings after translation. Moreover, a given latent construct should be measured by the same questionnaire items in different populations. In fact, researchers such as Irvine and Carroll (1980) made a convincing case for using factor-matching procedures to test for invariance of factor structures across groups before any quantitative analysis is performed.
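In the spirit of such factor-matching procedures, one practical check is to estimate the factor structure separately in each culture and compare the resulting loading matrices with Tucker's congruence coefficient. The sketch below is an assumption-laden illustration, not a procedure prescribed by the original article: the factor_analyzer package, the varimax rotation, and the data layout are all assumptions.

```python
# A minimal sketch: exploratory factor analysis per culture, then factor congruence.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def loadings(items: pd.DataFrame, n_factors: int) -> np.ndarray:
    """Rotated factor loadings for one culture's item responses."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(items)
    return fa.loadings_

def tucker_congruence(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise congruence between the factors of two loading matrices."""
    num = a.T @ b
    denom = np.sqrt(np.outer((a ** 2).sum(axis=0), (b ** 2).sum(axis=0)))
    return num / denom

# Hypothetical usage with item responses from two cultures:
# phi = tucker_congruence(loadings(items_culture_a, 3), loadings(items_culture_b, 3))
```

Congruence coefficients close to 1 (commonly 0.95 or above) are usually read as evidence that the corresponding factors are equivalent across groups.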
To translate correctly, the instrument must first be translated into the target language by a native speaker of that language. The text must then be back-translated into the original language, this time by a different translator who is a native speaker of the original language. Brislin (1986) provided fairly complete guidelines for this process.
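The comparison step of this back-translation cycle can be organized programmatically. The sketch below is only an assumption-laden aid: the forward and back translations come from the human native speakers described above, and the similarity threshold is arbitrary; flagged items still require human judgment about equivalence of meaning.

```python
# A minimal sketch that flags items whose back-translation diverges noticeably
# from the original wording, so they can be re-examined by the translators.
from difflib import SequenceMatcher

def flag_divergent_items(originals: list[str], back_translations: list[str],
                         threshold: float = 0.8) -> list[int]:
    """Return indices of items whose back-translation is dissimilar to the original."""
    flagged = []
    for i, (orig, back) in enumerate(zip(originals, back_translations)):
        similarity = SequenceMatcher(None, orig.lower(), back.lower()).ratio()
        if similarity < threshold:
            flagged.append(i)
    return flagged
```

Surface string similarity is only a coarse screen; the substantive judgment about whether meanings survived translation remains with bilingual reviewers, as Brislin's guidelines emphasize.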
Implications
Judging by the issues described above, achieving cross-cultural equivalence is far from straightforward. However, it is also clear that many precautions can be taken to prevent construct, method, and item bias and thus increase the level of equivalence. These range from sampling, wording, and translation to careful attention to administration procedures across cultures. A number of guidelines for cross-cultural research have been put forth by an international committee of scholars (Hambleton, 1994; van de Vijver & Hambleton, 1996). Even though the primary focus of these guidelines is on psychological and educational research, they generalize easily to MIS research.
In addition to prevention, various statistical tools can assist in the detection of the various types of biases. In summary, similar patterns of functional relationships among variables need to be shown (Triandis, 1976). Moreover, multimethod measurement can help avoid confounding the phenomenon of interest with an interaction between the method and the groups studied, because different methods are unlikely to share the same underlying statistical assumptions or to demand the same conceptualization abilities of respondents (Hui & Triandis, 1985). This idea is similar to the notions of multiple operationism and conceptual replication (Campbell & Fiske, 1959). Hui and Triandis (1985) claimed that this may not be as difficult as one may think, as long as there is prior planning of research. As an example, Hui and Triandis (1985) mentioned that an instrument may be improved by proper translation techniques, and:
... then establish conceptual/functional equivalence as well as instrument equivalence by the nomological network method and by examination of internal structure congruence. After that, the response pattern method and regressional methods can be used to test item equivalence and scalar equivalence. (p. 149)
The major implication of methodological problems is complications in making valid inferences from cross-cultural data. Clearly, there are many problems with correctly inferring from data in a cross-cultural research project and attributing results to true cross-cultural differences. To do so, alternative explanations need to be ruled out. Establishing (and not merely assuming) the four levels of cross-cultural equivalence previously discussed in this article is a major step in this direction.
Future Directions and Conclusion
Initial attempts at reviews of cross-cultural research in MIS (Evaristo, Karahanna, & Srite, 2000) show that, for the most part, MIS studies have refrained from testing theories across cultures, and when comparisons are made, they are often post hoc comparisons utilizing data from prior published studies in other countries. Clearly, this provides some insights into differences across cultures but suffers from a number of methodological shortcomings. In fact, the conclusions of Evaristo, Karahanna, and Srite (2000) were as follows:
In summary, we suggest that there are mainly three points where the MIS cross-cultural research is lacking: lack of theory base (testing or building); inclusion of culture as antecedents of constructs, and general improvement in methodologies used.
All three points are related, although to different extents, to methodological issues. The conclusion is that one critical issue cross-cultural research in MIS needs to address before reaching the level of sophistication and quality already attained by mainstream MIS research is attention to methodological concerns. The current article is a step in this direction and sets the stage for future research.
Key Terms
Conceptual/Functional Equivalence: Refers to whether a given construct has similar meaning across cultures.
Construct Bias: Occurs when a construct measured is not equivalent across cultures both at a conceptual level and at an operational level.
Equivalence in Construct Operationalization: Refers to whether a construct is manifested and operationalized the same way across cultures.
Item Bias: Refers to measurement artifacts, such as those arising from poor item translation, complex wording of items, or items inappropriate for a cultural context.
Item Equivalence: Refers to whether identical instruments are used to measure the constructs across cultures.
Method Bias: Refers to when subjects across cultures do not respond to measurement scales in the same manner. Bias in the scores on an instrument can arise due to characteristics of the instrument or its administration.
Multinational Corporation: A firm that has operations in multiple countries.
Scalar Equivalence: Refers to whether the construct is measured on the same metric across cultures; this occurs only if the instrument has achieved all prior levels of equivalence.
Transcultural Organization: A firm that operates across multiple cultures.