
Chapter 3 Research Methodology

3.7 Evaluation of Measurement Model and Structural Model

3.7.1 Measurement Model Assessment

According to Hair et al. (2017), the initial assessment of PLS-SEM relies on the measurement model. The researcher should examine the reliability and validity of the variables to ensure the constructs fall within an acceptable range of consistency and accuracy. The following subsections discuss the rules of thumb for internal consistency, indicator reliability, convergent validity and discriminant validity.

3.7.1.1 Internal Consistency Reliability

PLS-SEM software provides several approaches to assessing internal consistency reliability. Traditionally, Cronbach’s alpha is adopted to estimate the reliability of latent variables, under the assumption that all indicators are equally reliable (Hair et al., 2017). Cronbach’s alpha is commonly used to measure internal consistency in multi-item measurement instruments (Cronbach, 1951). In general, the greater the value of Cronbach’s alpha, the higher the reliability of that particular construct. Although the acceptable threshold for Cronbach’s alpha is debated among scholars from various backgrounds and its acceptable range remains somewhat vague, a generally accepted rule of thumb is tabulated below (Van Griethuijsen et al., 2015).

Table 3.3: Rule of Thumb for Cronbach’s Alpha

Cronbach’s alpha      Label
0.9 and above         Excellent
0.8 – 0.9             Adequate
0.7 – 0.8             Marginal
0.6 – 0.7             Seriously suspect
Below 0.6             Totally unacceptable

Source: Zeller, R. A. (2005). Measurement error, issues and solutions. In Encyclopedia of Social Measurement (pp. 665–676).
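To make the criterion concrete, Cronbach’s alpha can be reproduced directly from raw item scores using the standard formula α = k/(k−1) · (1 − Σs²ᵢ / s²ₜ), where s²ᵢ are the item variances and s²ₜ is the variance of the summed scale. The sketch below uses hypothetical five-point Likert data, not data from this study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 3 positively correlated items
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
alpha = cronbach_alpha(scores)  # ~0.92, "Excellent" per Table 3.3
```

Because the three items move together across respondents, the resulting alpha lands in the “Excellent” band of the rule of thumb above.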

Due to its assumption of equally reliable indicators and its general insensitivity, Cronbach’s alpha often understates internal consistency reliability (Hair et al., 2017). Rho A (ρA), proposed by Joreskog & Van Thillo (1971), is considered an alternative that compensates for this insensitivity of Cronbach’s alpha (Dijkstra & Henseler, 2015).

Additionally, composite reliability is most often referred to because it takes into account the differing outer loadings of the indicators. Composite reliability works similarly to Cronbach’s alpha: the higher the value, the greater the reliability. The values of composite reliability and their respective levels of acceptance are listed in Table 3.4. Notably, a composite reliability greater than 0.95 is considered undesirable because of its high tendency to produce inflated correlations (Hayduk & Littvay, 2012).

Table 3.4: Rule of Thumb for Composite Reliability

Composite reliability    Label
0.95 and above           Undesirable
0.70 – 0.90              Satisfactory to good
0.60 – 0.70              Acceptable (specifically for exploratory research)
Below 0.60               Totally unacceptable

Source: Hair, J. F. J., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Sage Publications.

To sum up, composite reliability will be recorded and reported for its precision and robustness in analysing internal consistency reliability.
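Because composite reliability weights each indicator by its standardized outer loading λ, it can be computed as ρc = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. A minimal sketch, using hypothetical loadings rather than results from this study:

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """Composite reliability (rho_c) from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    sum_sq = lam.sum() ** 2              # (sum of loadings) squared
    error_var = (1 - lam ** 2).sum()     # summed indicator error variances
    return sum_sq / (sum_sq + error_var)

# Hypothetical outer loadings for a four-indicator construct
cr = composite_reliability([0.75, 0.80, 0.82, 0.78])  # ~0.87, "Satisfactory to good"
```

Unlike Cronbach’s alpha, this formula lets stronger indicators contribute more, which is why it is preferred when loadings are unequal.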

3.7.1.2 Convergent Validity

The term ‘convergent validity’ broadly refers to the degree to which a measure correlates with other measures of the same construct (Hair et al., 2017). Simply put, the indicators of a given variable should share a high proportion of variance. In light of this, both the outer loadings and the average variance extracted (AVE) matter.

The magnitude of the outer loadings, also termed indicator reliability, should be 0.708 or greater (Hair et al., 2017). In most cases 0.70, which is very close to 0.708, is also acceptable, while indicators with outer loadings below 0.40 should always be excluded from the variables (Hair et al., 2011). Indicators with loadings between 0.40 and 0.70, however, require further analysis: the decision to retain or delete them should depend on whether deletion improves internal consistency reliability and AVE.

As far as AVE is concerned, the common rule of thumb is 0.50 or higher (Hair et al., 2011). The rationale for requiring an AVE above 0.50 is that the construct then explains at least half of the variance of its indicators. A low AVE implies that more variance remains in the indicators’ error terms, which in turn suggests low convergent validity.
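Since AVE is simply the mean of the squared standardized outer loadings, it can be computed in one line. A sketch with the same hypothetical loadings as above (not results from this study):

```python
import numpy as np

def average_variance_extracted(loadings) -> float:
    """AVE: mean of the squared standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical outer loadings; each lambda^2 is the variance the
# construct explains in that indicator, so the mean is the AVE.
ave = average_variance_extracted([0.75, 0.80, 0.82, 0.78])  # ~0.62 > 0.50
```

Note that an AVE of exactly 0.50 corresponds to loadings of about 0.708, which is where the indicator-reliability threshold above comes from.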

3.7.1.3 Discriminant Validity

The term ‘discriminant validity’ refers to the extent to which one construct differs from another. Cross-loadings and the Fornell-Larcker criterion are traditionally used to estimate discriminant validity. However, their overall poor performance has been criticised on reasonable grounds: scholars found that neither cross-loadings nor the Fornell-Larcker criterion provides reliable statistical evidence for detecting discriminant validity issues (Henseler et al., 2015). Cross-loadings face complications when two constructs are perfectly correlated, while the Fornell-Larcker criterion fails to indicate discriminant validity issues when the indicator loadings differ only minimally (Hair et al., 2017).

Against this background, Henseler et al. (2015) introduced the heterotrait-monotrait ratio (HTMT) to assess the ratio of the between-trait correlations to the within-trait correlations. It estimates what the true correlation between two constructs would be if they were perfectly reliable (Hair et al., 2017). The lower the HTMT value, the greater the discriminant validity.

It is worth noting that the threshold for HTMT remains debatable. An HTMT value below 0.85 may be considered a conservative criterion, while values between 0.85 and 0.90 can also be acceptable (Henseler et al., 2015). As debate over the HTMT threshold continues, scholars recommend performing bootstrapping to derive a bootstrap confidence interval. If the bootstrap confidence interval includes the value 1, a lack of discriminant validity is certain (Hair et al., 2017); otherwise, the latent variables can be regarded as distinct from one another. In alignment with the suggestions of Hair et al. (2017), this study primarily adopted the HTMT assessment.
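The ratio itself can be illustrated on simulated data: HTMT is the mean between-construct indicator correlation divided by the geometric mean of the average within-construct indicator correlations. The sketch below generates two hypothetical three-indicator constructs whose underlying factors correlate only weakly, so the HTMT should fall well below the conservative 0.85 criterion:

```python
import numpy as np

def htmt(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Heterotrait-monotrait ratio for two indicator blocks (rows = respondents)."""
    corr = np.corrcoef(np.hstack([block_a, block_b]), rowvar=False)
    pa, pb = block_a.shape[1], block_b.shape[1]
    hetero = corr[:pa, pa:].mean()                            # between-construct correlations
    mono_a = corr[:pa, :pa][np.triu_indices(pa, k=1)].mean()  # within construct A
    mono_b = corr[pa:, pa:][np.triu_indices(pb, k=1)].mean()  # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# Simulated (hypothetical) data: two latent factors correlated at 0.3,
# each measured by three indicators with standardized loadings of 0.8.
rng = np.random.default_rng(42)
n = 500
f1 = rng.normal(size=n)
f2 = 0.3 * f1 + np.sqrt(1 - 0.3 ** 2) * rng.normal(size=n)
block_a = np.column_stack([0.8 * f1 + 0.6 * rng.normal(size=n) for _ in range(3)])
block_b = np.column_stack([0.8 * f2 + 0.6 * rng.normal(size=n) for _ in range(3)])
value = htmt(block_a, block_b)  # close to 0.3, i.e. below the 0.85 threshold
```

In practice SmartPLS reports HTMT for every pair of constructs; the sketch only shows the underlying arithmetic for one pair.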
