CHAPTER 6: QUANTITATIVE RESULTS
6.8 Structural Equation Modeling (SEM)
6.8.2 Validity Assessment of the Study Constructs
Figure 6.3 The Improved (Final) CFA Model
In examining construct validity, three components can be assessed: convergent validity, discriminant validity and nomological validity (Hair et al., 2014). This study assessed convergent validity and discriminant validity; nomological validity was not assessed because convergent and discriminant validity are the most widely accepted forms of validity. This section presents the convergent and discriminant validity of the three survey questionnaires.
6.8.2.1 Construct Validity
Reliability and validity are the two most critical properties of an instrument that researchers should test in order to report confidently on the results obtained from a survey (Burton & Mazerolle, 2011). Validity refers to the degree to which an instrument measures what it was designed or intended to measure (Burton & Mazerolle, 2011). There are four main ways of establishing the validity of an instrument: face validity, content validity, criterion validity and construct validity. Confirmatory Factor Analysis (CFA) can evaluate two aspects of construct validity: discriminant validity and convergent validity (Sun, 2015). After content validity was established, as explained in the research methodology chapter, the construct validity of the instrument was determined before the data analysis procedures were computed.
Construct validity is the degree to which the operational measures correlate with the theoretical concept under investigation (Burton & Mazerolle, 2011). Since this study used confirmatory factor analysis (CFA), construct validity could be assessed directly, as one of the principal aims of CFA is to evaluate the construct validity of a proposed measurement theory and to determine the accuracy of measurement (Hair et al., 2014).
In examining construct validity, three components can be assessed: convergent validity, discriminant validity and nomological validity (Hair et al., 2014). In the current study, convergent and discriminant validity were examined; nomological validity was not assessed because convergent and discriminant validity are the most commonly assessed and widely accepted components (Hair et al., 2014). The convergent and discriminant validity of the survey questionnaire are presented in the following section.
(i) Convergent Validity
Convergent validity is evaluated through the assessment of factor loadings, average variance extracted, and composite reliability (Hair et al., 2014). According to Hair et al. (2014), the accepted value for a standardised loading in confirmatory factor analysis (CFA) is 0.5, and a reliability of 0.7 or more is considered good. The most commonly used criterion for assessing the degree of shared variance between the latent variables of a model is that of Fornell and Larcker (1981). According to Fornell and Larcker (1981), the convergent validity of the measurement model can be assessed by the Average Variance Extracted (AVE) and Composite Reliability (CR):
AVE measures the level of variance captured by a construct relative to the level due to measurement error; values above 0.7 are considered very good, while 0.5 is the required minimum for convergent validity; and
CR is a less biased estimate of reliability than Cronbach’s Alpha; the acceptable value of composite reliability is 0.7 and above.
The Average Variance Extracted (AVE) for a construct is defined as follows:

$$\mathrm{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^{2}}{n} \qquad (1)$$

where AVE = average variance extracted, $\lambda_i$ = the standardised factor loading of item $i$, and $n$ = the number of items.
Composite Reliability (CR) is defined as follows:

$$\mathrm{CR} = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{n} \lambda_i\right)^{2} + \sum_{i=1}^{n} e_i} \qquad (2)$$

where $\lambda_i$ = the standardised factor loading of item $i$, $e_i$ = the error variance of item $i$, and $n$ = the number of items. In this study, a minimum factor loading of 0.50 was used as the cut-off point, based on the recommendation of Hair et al. (2014), and AVE values of 0.50 or more together with composite reliability values of 0.70 or more were required as evidence of convergent validity. The convergent validity results of the study constructs used in the survey questionnaire are presented in Table 6.15.
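To make the two formulas concrete, the short Python sketch below computes the AVE and CR for a construct from its standardised factor loadings and checks them against the 0.50 and 0.70 cut-offs adopted in this study. It is a minimal illustration, not the procedure actually run in the analysis software: it assumes each item's error variance can be approximated as e_i = 1 − λ_i², and the function names are illustrative. The loadings used are the two Trainees' Behaviour (B) loadings reported in Table 6.15 below.

```python
# Minimal sketch: AVE and CR from standardised factor loadings,
# assuming error variance e_i = 1 - lambda_i^2 for each item.

def average_variance_extracted(loadings):
    """AVE = sum(lambda_i^2) / n  (Equation 1)."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum lambda_i)^2 / ((sum lambda_i)^2 + sum e_i)  (Equation 2)."""
    sum_sq = sum(loadings) ** 2
    error_var = sum(1 - l ** 2 for l in loadings)
    return sum_sq / (sum_sq + error_var)

# Standardised loadings for the Trainees' Behaviour (B) construct (Table 6.15).
b_loadings = [0.852, 0.739]

ave = average_variance_extracted(b_loadings)
cr = composite_reliability(b_loadings)

print(f"AVE = {ave:.3f} (minimum 0.50: {'met' if ave >= 0.50 else 'not met'})")
print(f"CR  = {cr:.3f} (minimum 0.70: {'met' if cr >= 0.70 else 'not met'})")
# Prints AVE = 0.636 and CR = 0.777, matching the values reported in Table 6.15.
```

In practice the error variances would normally be taken directly from the estimated CFA output rather than derived from the loadings; the sketch simply shows how Equations (1) and (2) combine the same quantities.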
Table 6.15 Convergent Validity Results of the Survey Questionnaire

Construct  Item   Standardised Factor Loading   AVE     CR
R          R1     0.836                         0.505   0.753
           R3     0.759
           R4     0.669
           R5     0.765
           R9     0.695
L          L1     0.719                         0.569   0.797
           L3     0.679
           L4     0.854
OB         OB1    0.819                         0.634   0.837
           OB2    0.684
           OB3    0.873
TC         TC1    0.827                         0.725   0.887
           TC2    0.926
           TC3    0.796
B          B2     0.852                         0.636   0.777
           B3     0.739
RES        RES1   0.964                         0.524   0.754
           RES2   0.617
           RES3   0.513

Note: R = Reaction, L = Learning, OB = Training Objectives, TC = Training Content, B = Trainees' Behaviour, RES = Training Results; AVE = Average Variance Extracted, CR = Composite Reliability.
The results of the measurement model, as shown in Table 6.15, reveal that all the standardised factor loadings (standardised regression weights) were above the minimum cut-off point of 0.5, with 0.513 being the lowest standardised factor loading. All the composite reliability values exceeded the recommended threshold of 0.70 (Fornell & Larcker, 1981; Hair et al., 2014), and all the average variance extracted (AVE) values were greater than 0.50 (Fornell & Larcker, 1981; Hair et al., 2014). These figures therefore demonstrate a high level of convergent validity for the latent constructs used in this study. The next section presents the discriminant validity.
(ii) Discriminant Validity
In CFA, discriminant validity refers to "the distinctiveness of the factors measured by different sets of indicators" (Kline, 1998:60), whereas convergent validity refers to the cohesiveness of a set of indicators in measuring their underlying factor. Discriminant validity is the extent to which factors are distinct and uncorrelated (Statwiki, 2018). According to Hair et al. (2014), discriminant validity can be assessed by comparing the average variance extracted for each of two constructs with the square of the correlation estimate between those two constructs. Fornell and Larcker (1981) state that discriminant validity is demonstrated when the average variance extracted estimates are larger than the squared correlation estimate. According to Kline (1998), for a given estimated model there is evidence of discriminant validity if the factors are not excessively correlated with each other (correlations above 0.85 being excessive), and evidence of convergent validity if each set of indicators has relatively high structure coefficients on the factor it is specified to measure. Table 6.16 presents the discriminant validity of the survey questionnaire.
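As an illustration of the Fornell and Larcker (1981) comparison described above, the minimal Python sketch below checks, for each pair of constructs, whether both AVE values exceed the squared inter-construct correlation, and also flags any correlation above Kline's (1998) 0.85 ceiling. It uses three constructs and the values reported in Table 6.16 below purely for illustration; the dictionary layout and names are assumptions of this sketch, not output of the analysis software.

```python
# Minimal sketch of the Fornell-Larcker discriminant validity check,
# using AVE values and inter-construct correlations reported in Table 6.16.
from itertools import combinations

ave = {"RES": 0.524, "R": 0.505, "L": 0.569}

# Inter-construct correlation estimates (lower triangle of Table 6.16).
corr = {("RES", "R"): 0.083, ("RES", "L"): 0.133, ("R", "L"): 0.153}

for a, b in combinations(ave, 2):
    r = corr.get((a, b), corr.get((b, a)))
    squared = r ** 2
    distinct = ave[a] > squared and ave[b] > squared  # Fornell & Larcker (1981)
    excessive = abs(r) > 0.85                         # Kline (1998) ceiling
    print(f"{a}-{b}: r^2 = {squared:.3f}, "
          f"AVEs = ({ave[a]:.3f}, {ave[b]:.3f}), "
          f"discriminant validity: {'yes' if distinct and not excessive else 'no'}")
```

Because the maximum shared variance (MSV) reported in Table 6.16 is simply the largest of these squared correlations for each construct, comparing MSV with AVE is an equivalent way of applying the same criterion.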
Table 6.16 Discriminant Validity of the Latent Constructs

Construct  CR     AVE    MSV    MaxR(H)  RES    R      L      OB     TC     B
RES 0.754 0.524 0.130 0.934 0.724
R 0.753 0.505 0.007 0.759 0.083 0.711
L 0.797 0.569 0.130 0.822 0.133 0.153 0.723
OB 0.837 0.634 0.016 0.860 0.128 0.321 0.036 0.755
TC 0.887 0.725 0.017 0.908 0.019 0.014 0.132 0.119 0.724
B 0.777 0.636 0.005 0.794 0.004 0.047 0.070 0.052 0.054 0.712
Note: CR= Composite Reliability, AVE = Average Variance Extracted, MSV= Maximum Shared Variance, MaxR(H) = Maximum Reliability.
Table 6.16 shows that the composite reliability (CR) of all six latent constructs is higher than 0.70 and that the average variance extracted (AVE) also exceeded the minimum acceptable value of 0.50, indicating good construct reliability and convergent validity, respectively (Byrne, 2010). In addition, the maximum shared variance (MSV) of every construct was lower than its AVE, providing evidence of discriminant validity (Hair et al., 2014). The next section presents the analysis of the SEM and the hypothesis testing.