RESEARCH METHODOLOGY
3.2 Quantitative Method
3.2.6 Analysis
For scale refinement, item discrimination (t-ratio) analysis, Cronbach’s alpha coefficient analysis, and EFA were carried out. For scale validation, CFA and analyses of convergent, discriminant, and concurrent validity were conducted. Finally, to achieve the main aim of this study, which was to examine the roles of PsyCap in the JD-R model, SEM analyses were used.
3.2.6.1 Item Discrimination (t-ratio)
The t-ratio analysis of item discrimination compares the mean values of each item between a lower group (scores below the 25th percentile) and an upper group (scores above the 75th percentile). If the independent t-test for an item is significant (p < .05), the item has adequate discriminating power.
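As a minimal illustration of this procedure, the sketch below assumes that responses are stored in a pandas DataFrame with one column per item and that the two groups are formed on the total scale score (one common convention); the column names are hypothetical.

```python
import pandas as pd
from scipy import stats

def item_discrimination(df: pd.DataFrame, item_cols, total_col="total"):
    """Independent t-test of each item between the upper (>= 75th percentile)
    and lower (<= 25th percentile) groups of the total scale score."""
    q1 = df[total_col].quantile(0.25)
    q3 = df[total_col].quantile(0.75)
    lower = df[df[total_col] <= q1]
    upper = df[df[total_col] >= q3]

    rows = {}
    for item in item_cols:
        t, p = stats.ttest_ind(upper[item], lower[item], equal_var=False)
        rows[item] = {"t": t, "p": p, "discriminates": p < .05}
    return pd.DataFrame(rows).T
```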
3.2.6.2 Internal Consistency Assessment
Cronbach’s alpha is the analysis most commonly used to confirm a scale’s internal consistency reliability as well as the appropriateness of domain sampling, which means that all items expected to belong to a certain domain should hold similar average intercorrelations (Hinkin, 2005). A coefficient alpha above .70 reflects strong item covariance and adequate domain sampling (Hinkin, 2005; MacKenzie, Podsakoff, & Podsakoff, 2011). If the number of items remains large, researchers can delete items that do not improve the reliability of the scale (Hinkin, 2005).
3.2.6.3 Factor Analysis
Factor analysis is a general term for a family of statistical techniques that explain a group of observed or measured variables (e.g., single items of a scale) in terms of a smaller number of hypothetical or latent variables (i.e., factors) (B. Yang, 2005). Two commonly used factor analysis methods are EFA and CFA.
EFA is generally used to explore the factors that underlie a group of observed variables reflecting the phenomenon under study (B. Yang, 2005). A sample size of no less than 150 is recommended to ensure sufficient variance and an accurate solution.
In terms of statistical results, an item should be retained if it loads on a specific factor at greater than .40; no item should load at greater than .40 on two or more factors; items whose factor loadings on different factors differ by less than .20 should be eliminated from the scale; each factor must possess an eigenvalue above one; and the retained factors should account for over 60% of the total item variance (Hinkin, 2005; Howard, 2016; B. Yang, 2005). Finally, another EFA is conducted after deleting any improperly loading items. In this study, principal component extraction with varimax rotation, the most commonly used techniques, was selected.
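The item-level retention criteria above can be applied mechanically to a rotated loading matrix. The sketch below is a simplified illustration that assumes the loadings have already been obtained (e.g., from principal component extraction with varimax rotation) and are stored in a pandas DataFrame with items as rows and factors as columns; the eigenvalue and total-variance criteria are inspected separately from the extraction output.

```python
import pandas as pd

def screen_items(loadings: pd.DataFrame,
                 primary_cutoff: float = .40,
                 cross_cutoff: float = .40,
                 min_gap: float = .20) -> pd.DataFrame:
    """Flag items that violate the EFA retention criteria described above."""
    abs_load = loadings.abs()
    report = pd.DataFrame(index=loadings.index)

    # Criterion 1: the item loads above .40 on its primary factor.
    report["primary_loading"] = abs_load.max(axis=1)
    report["meets_primary"] = report["primary_loading"] > primary_cutoff

    # Criterion 2: the item does not load above .40 on two or more factors.
    report["cross_loads"] = (abs_load > cross_cutoff).sum(axis=1) > 1

    # Criterion 3: the two highest loadings differ by at least .20.
    def gap(row):
        top = row.nlargest(2)
        return top.iloc[0] - top.iloc[1]
    report["gap"] = abs_load.apply(gap, axis=1)

    report["retain"] = (report["meets_primary"]
                        & ~report["cross_loads"]
                        & (report["gap"] >= min_gap))
    return report
```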
CFA is used to confirm a hypothesized factor structure, also called a measurement model, based on a priori research and theories or on results from exploratory factor analysis. Statistically, CFA tests whether the correlation matrix of the observed variables from the collected data is equivalent to that implied by the hypothesized measurement model (Hinkin, 2005; B. Yang, 2005). To conduct CFA, a satisfactory sample from the target population, ten subjects for each item or at least 200 subjects, is required (B. Yang, 2005). The goodness-of-fit measures indicating that the hypothesized measurement model fits the collected data satisfactorily are described in the next section.
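As a rough illustration of how such a measurement model might be specified and tested in software, the sketch below assumes the Python package semopy and a hypothetical two-factor structure with three items per factor; the actual analyses in this study may have been run in other SEM software.

```python
import pandas as pd
import semopy  # one possible SEM package, chosen here only for illustration

# Hypothetical measurement model: each latent factor is reflected by three items.
MEASUREMENT_MODEL = """
ChallengeDemands =~ cd1 + cd2 + cd3
HindranceDemands =~ hd1 + hd2 + hd3
"""

def run_cfa(data: pd.DataFrame):
    model = semopy.Model(MEASUREMENT_MODEL)
    model.fit(data)                       # maximum likelihood estimation by default
    estimates = model.inspect()           # factor loadings and covariances
    fit_stats = semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, etc.
    return estimates, fit_stats
```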
3.2.6.4 Convergent and Discriminant Validity Assessment
Convergent validity means that the indicators or items belonging to a specific construct (in this case, a scale) are supposed to share a high proportion of variance in common (Hair, Anderson, Tatham, & Black, 1998). One way to estimate the convergent validity of a newly developed measure is to consider (a) the average variance extracted (AVE), a measure of the convergence of a group of items indicating an unmeasured variable or latent construct; and (b) composite reliability (CR), a measure of internal consistency among a set of items indicating an unobserved variable or latent construct. AVE values higher than .50 and CR values higher than .70 indicate that the convergent validity of the instrument is adequate (Fornell & Larcker, 1981; Hair et al., 1998).
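Given standardized factor loadings from the CFA, both indices can be computed directly for each construct; the sketch below uses hypothetical loading values.

```python
import numpy as np

def ave_and_cr(loadings):
    """AVE and composite reliability for one construct,
    given its items' standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)                          # average variance extracted
    error = np.sum(1 - lam ** 2)                     # summed item error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error)   # composite reliability
    return ave, cr

# Adequate convergent validity if AVE > .50 and CR > .70 (hypothetical loadings).
ave, cr = ave_and_cr([.72, .68, .81, .75])
```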
Discriminant validity means that a factor of a scale is supposed to be distinct, explaining some qualities that the other factors in the scale do not (see Hair et al., 1998). One way to estimate the discriminant validity of a newly developed measure is to conduct a chi-square difference test comparing the one-factor model (i.e., the scale is explained by a single factor) with the alternative model (i.e., the scale is explained by more than one factor, depending on theory). A significant chi-square difference test suggests adequate discriminant validity (see Cohen, 1996).
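This comparison can be carried out from the fit statistics of the two nested models; the sketch below assumes the χ² values and degrees of freedom for the one-factor and multi-factor models have already been obtained.

```python
from scipy.stats import chi2

def chi_square_difference(chi2_one_factor, df_one_factor,
                          chi2_multi_factor, df_multi_factor, alpha=.05):
    """Test whether the multi-factor model fits significantly better
    than the nested one-factor model."""
    delta_chi2 = chi2_one_factor - chi2_multi_factor
    delta_df = df_one_factor - df_multi_factor
    p_value = chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p_value, p_value < alpha  # True -> discriminant validity supported
```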
3.2.6.5 Structural Equation Modeling
In this study, SEM was conducted to examine concurrent validity, a form of criterion-related validity, which was evaluated by relating the scores of the newly developed scales (i.e., the challenge-hindrance scale and the job resources scale) to those of the criterion measures (i.e., the OLBI and the UWES), which were administered at the same time (see Lounsbury, Gibson, & Saudargas, 2006). SEM was also used to examine the full JD-R model and the roles of PsyCap in the JD-R model.
SEM was first established in 1970 by Karl Jöreskog (Klem, 2000).
SEM is a tool that combines and simultaneously tests path analysis (the structural model) and confirmatory factor analysis (the measurement model) among several variables (Burnette & Williams, 2005). Unlike traditional path analysis, which focuses on structural relationships among measured variables only, the path analysis in SEM involves structural relationships among unmeasured (latent) variables that are reflected by indicators, called observed variables (e.g., items, combinations of items, or whole scales), based on theory (Burnette & Williams, 2005; Klem, 2000). Although path analysis uses linear equations to test causal linear relationships, and the equations of path analysis and regression are similar, random measurement errors can be included only in the structural model of SEM (Burnette & Williams, 2005). CFA, referred to in SEM as the measurement model, is generally used to indicate the relations between observed variables and unobserved variables based on hypotheses derived from a priori theories or studies (Burnette & Williams, 2005).
Three key steps of SEM are described as follows (Burnette & Williams, 2005). The first step is model specification, which involves stating the complete model under study, including the measurement part (i.e., specifying the directional relations between latent variables and indicators, and between random measurement errors and indicators) and the structural part (i.e., specifying the directional relations among latent variables). The second step is parameter estimation, for which the most commonly accepted method is maximum likelihood estimation (MLE), which rests on the assumptions of normal distribution and a reasonable sample size. The third step is to evaluate model fit. Goodness-of-fit estimates indicate whether the proposed model fits the data. Several goodness-of-fit measures specify a good fit of a model, such as a high p value for the χ² test, high values for the GFI, AGFI, CFI, TLI, and NFI, and low values for the RMSEA, RMR, and SRMR.
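To make the first step concrete, the fragment below sketches how the measurement and structural parts might be written in semopy’s lavaan-style syntax, continuing the earlier CFA sketch; the latent and indicator names are hypothetical placeholders rather than the exact variables of this study.

```python
import semopy  # same illustrative package as in the CFA sketch above

FULL_MODEL = """
# Measurement part: latent variables (=~) and their observed indicators.
JobResources   =~ jr1 + jr2 + jr3
PsyCap         =~ pc1 + pc2 + pc3
WorkEngagement =~ we1 + we2 + we3

# Structural part: directional relations among the latent variables.
WorkEngagement ~ JobResources + PsyCap
"""

model = semopy.Model(FULL_MODEL)
# model.fit(data) would estimate the parameters by maximum likelihood (step 2),
# and semopy.calc_stats(model) would return the fit indices used in step 3.
```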
Because the χ² significance test is affected by large sample sizes (Hair et al., 1998), other statistical fit indices were considered instead. In this study, the criteria used to determine an acceptable model fit were as follows: a CMIN/df ratio less than 5 (Bollen, 1989), a CFI value greater than .90 (B. Yang, 2005), a TLI value greater than .90 (Hair et al., 1998), an RMSEA value less than .08 (Browne & Cudeck, 1993; Burnette & Williams, 2005), and an SRMR value less than .08 (L. T. Hu & Bentler, 1999).
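For convenience, these cutoffs can be gathered into a small helper that flags whether a fitted model meets each criterion; the index values themselves would come from the SEM software’s output (e.g., the fit statistics returned in the sketches above).

```python
def acceptable_fit(cmin, df, cfi, tli, rmsea, srmr):
    """Check the model-fit criteria adopted in this study."""
    checks = {
        "CMIN/df < 5": (cmin / df) < 5,
        "CFI > .90":   cfi > .90,
        "TLI > .90":   tli > .90,
        "RMSEA < .08": rmsea < .08,
        "SRMR < .08":  srmr < .08,
    }
    return checks, all(checks.values())
```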