
Proposed Theoretical Models, Research Questions and Hypotheses

Study 1: Development and Preliminary Validation of Occupational Stress Scale for Soldiers (OSSS)

4.8 Phase 4: Confirmatory Factor Analysis (CFA)

Name of the factors    Percentage of variance    Cumulative percentage of variance
Job related            22.50                     22.50
Individual/Personal    15.25                     37.75
Administrative         12.80                     50.55
Group/Team             12.00                     62.55

Note. Factor 1 = Job related stressors. Factor 2 = Individual/Personal stressors. Factor 3 = Administrative stressors. Factor 4 = Group stressors.

In CFA, the number of factors and the variables making up those factors are specified before carrying out the analysis. Thus, the role of CFA statistics is to suggest how well this specification of the factors matches reality (the actual data) (Hair, Ringle, & Sarstedt, 2012). CFA is commonly used for establishing the validity of a single-factor model, testing the significance of a specific factor loading, testing whether a set of factors is correlated or uncorrelated, and assessing the convergent and discriminant validity of a set of measures (DeCoster et al., 1998). In CFA, several statistical tests are used to determine how well the model fits the data (Suhr, 2006). A “good model fit” indicates that the model is plausible (Schermelleh-Engel, Moosbrugger, & Müller, 2003).

The first step in conducting CFA is to develop a hypothesized measurement model, using the factor structure obtained from the EFA, in order to test its validity (Byrne, 2001). Once the measurement model was developed, the estimations were carried out using the Maximum Likelihood method. It is the most common estimation procedure and remains dependable under moderate violations of normality in the data (Hair, Ringle, & Sarstedt, 2012). As suggested in the literature, three key sets of parameters were estimated and reported while conducting CFA (Furr, 2011): fit indices, parameter estimates, and modification indices. Each of these is discussed below.

4.8.2 Fit Indices

The following fit indices are commonly reported in CFA.

Chi-square/df (CMIN/df): The chi-square test indicates the difference between the observed and expected covariance matrices. Values closer to zero indicate a better fit: the smaller the difference between the expected and observed covariance matrices, the better the model fit (Gatignon, 2010).

GFI (Goodness of Fit Index): The goodness-of-fit index (GFI) is a measure of fit between the hypothesized model and the observed covariance matrix (Baumgartner & Homburg, 1996). Values generally range from 0 to 1, and a value of 0.90 or above is desirable for a good fit (Tuncer & Kaysi, 2013).

SRMR (Standardized Root Mean Square Residual): It is defined as the square root of the discrepancy between the sample covariance matrix and the model covariance matrix (Hooper, Coughlan & Mullen, 2008). The estimated values range from 0 to 1, with a value of 0.08 or less indicating an acceptable model (Hu & Bentler, 1999).

RMSEA (Root Mean Square Error of Approximation): RMSEA analyzes the discrepancy between the hypothesized model, with optimally chosen parameter estimates, and the population covariance matrix. It is able to avoid issues of sample size in fit estimation (Hooper, Coughlan & Mullen, 2008). The RMSEA ranges from 0 to 1, with smaller values indicating better model fit. A value of .06 or less is indicative of acceptable model fit (Brown, 2015).

CFI (Comparative Fit Index): The comparative fit index analyzes the model fit by examining the discrepancy between the data and the hypothesized model. It adjusts for the issues of sample size inherent in the chi-square test of model fit (Gatignon, 2010). A CFI value of 0.95 or higher is presently accepted as an indicator of good fit (Hu & Bentler, 1999).

NFI (Normed Fit Index): It takes values between 0 and 1, with higher values indicating better fit. Values greater than 0.90 are acceptable, while values greater than 0.95 indicate a good fit. It belongs to the group of fit indices computed against the independence (baseline) model (Rahim, Civelek, & Liang, 2018).

Pclose: The p value for the test of close fit (null hypothesis: RMSEA ≤ .05); the value should be greater than 0.05 (Hu & Bentler, 1999).
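To make the benchmarks above concrete, the sketch below computes CMIN/df, RMSEA, and CFI from the chi-square statistics that SEM software such as AMOS reports. The numeric values are hypothetical and serve only to illustrate how the indices relate to the thresholds; a real analysis should rely on the software's own output.

```python
import math

def fit_indices(chi2, df, chi2_base, df_base, n):
    """Compute common CFA fit indices from chi-square statistics.

    chi2, df:           model chi-square and degrees of freedom
    chi2_base, df_base: baseline (independence) model values
    n:                  sample size
    """
    cmin_df = chi2 / df
    # RMSEA: per-degree-of-freedom discrepancy, corrected for sample size
    rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    # CFI: improvement of the model over the baseline model, adjusted for df
    cfi = 1 - max(chi2 - df, 0) / max(chi2_base - df_base, chi2 - df, 0)
    return cmin_df, rmsea, cfi

# Hypothetical values for illustration only
cmin_df, rmsea, cfi = fit_indices(chi2=150, df=100,
                                  chi2_base=1200, df_base=120, n=300)
# cmin_df = 1.5, rmsea ≈ 0.041, cfi ≈ 0.954 — within the benchmarks
# discussed above (RMSEA ≤ .06, CFI ≥ .95)
```

GFI, NFI, and Pclose are omitted here because they depend on quantities (fitted covariance matrices, non-central chi-square probabilities) that SEM software computes internally.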

The assessment of fit indices presents the researcher with two possibilities: if the indices suggest that the model fit is adequate, the parameter estimates can be examined to evaluate the psychometric qualities of the model. If, on the other hand, they indicate that the fit is inadequate, the modification indices have to be examined so that revisions can be made to the model (Furr, 2011).

4.8.3 Parameter estimates

The parameter estimates are examined when the hypothesized model is deemed fit by the fit indices. In the AMOS software, factor loadings are termed Standardized Regression Weights and range from −1 to +1. These loadings are examined to check whether any fall below 0.50. The standardized factor loadings of all items should exceed the threshold of 0.60, and ideally be 0.70 or higher, as suggested by Chin, Gopal & Salisbury (1997) and Hair, Ringle, & Sarstedt (2012); items that do not meet this threshold are considered candidates for deletion. However, other results (for example, the associated standardized residual values and squared multiple correlations) should also be considered before deciding to remove any variable.
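As a simple illustration of this screening step, the snippet below flags items whose standardized loadings fall below the 0.50 cutoff. The item names and loading values are hypothetical.

```python
# Hypothetical standardized regression weights (factor loadings)
loadings = {
    "job1": 0.82, "job2": 0.74, "job3": 0.46,
    "pers1": 0.68, "pers2": 0.55,
}

# Items below the 0.50 cutoff are candidates for deletion; the final
# decision should also weigh standardized residuals and squared
# multiple correlations, as noted above.
candidates = [item for item, load in loadings.items() if load < 0.50]
# candidates == ["job3"]
```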

4.8.4 Modification Indices (MI)

If the values of the fit indices indicate a poor fit, then the modification indices are examined to identify possible revisions to the hypothesized measurement model that would improve its fit. The size of a modification index is examined to decide whether any changes can or should be made (Furr, 2011). Large MI values for a variable indicate that removing that variable will improve the model fit, and it is common to delete such items from further analysis (Hair, Ringle, & Sarstedt, 2012). Any decision to remove a variable should be taken only after considering other aspects, such as the standardized regression weights and standardized residual values. In addition, deletions should be made in consultation with theory.

4.8.5 Model Re-specification

If any changes are made to the measurement model (for example, the deletion of an item), then the next step is to re-specify the measurement model. This means running the analysis again and re-estimating the model fit and other parameters to see whether the model now achieves an acceptable fit. A model fit is acceptable when the values of the various fit indices are within the acceptable levels (benchmark values). Decisions regarding model re-specification are based on three main criteria: standardized regression weights (or estimated loadings), standardized residuals, and modification indices (Hair, Ringle, & Sarstedt, 2012).
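The re-specification cycle can be sketched as a simple loop: re-estimate, check fit, delete the weakest-loading candidate, and repeat. In the sketch below, `fit_cfa` is a hypothetical stand-in for re-estimation in real SEM software (AMOS in this study), and the CFI target follows Hu and Bentler's (1999) benchmark; it is an assumption for illustration, not an actual API.

```python
def respecify(items, loadings, fit_cfa, cutoff=0.50, cfi_target=0.95):
    """Iteratively delete weak items until the model fit is acceptable.

    fit_cfa: placeholder callable that re-estimates the model and
             returns fit indices for the given item set (hypothetical)
    loadings: maps item name -> standardized loading
    """
    items = list(items)
    while True:
        fit = fit_cfa(items)                  # re-estimate the model
        if fit["CFI"] >= cfi_target:
            return items, fit                 # acceptable fit reached
        worst = min(items, key=loadings.get)  # weakest loading
        if loadings[worst] >= cutoff:
            return items, fit                 # no defensible deletion left
        items.remove(worst)                   # delete, then refit
```

In practice, each deletion should also be checked against the modification indices, standardized residuals, and theory, as emphasized above, rather than being driven by loadings alone.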

4.8.6 Scale Validity using Convergent and Discriminant Validity

Convergent validity (assessed here through the Average Variance Extracted, AVE) refers to the degree to which two measures of constructs that theoretically should be related are in fact related (Trochim, 2006). Convergent validity can be established if two similar constructs correspond with one another, while discriminant validity applies to two dissimilar constructs that are easily differentiated. Campbell and Fiske (1959) stress the importance of using both discriminant and convergent validation techniques when assessing new tests; in other words, in order to establish construct validity, one has to demonstrate both convergence and discrimination. A successful evaluation of convergent validity shows that a test of a concept is highly correlated with other tests designed to measure theoretically similar concepts (Trochim, 2006). Discriminant validity (assessed here through the Maximum Shared Variance, MSV) tests whether concepts or measurements that are not supposed to be related are actually unrelated. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts (Campbell & Fiske, 1959). The AVE value should be 0.50 or above, and the MSV value should be less than the AVE, that is, MSV < AVE (Hair, Ringle, & Sarstedt, 2012).
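Under these definitions, AVE and MSV can be computed directly from the standardized loadings and the inter-factor correlations. The values below are hypothetical and serve only to illustrate the MSV < AVE check.

```python
def ave(loadings):
    # Average Variance Extracted: mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def msv(inter_factor_correlations):
    # Maximum Shared Variance: largest squared correlation with any
    # other factor
    return max(r ** 2 for r in inter_factor_correlations)

# Hypothetical loadings of one factor's items, and its correlations
# with the other three factors
job_ave = ave([0.82, 0.74, 0.71, 0.66])  # ≈ 0.54, above the 0.50 cutoff
job_msv = msv([0.41, 0.35, 0.28])        # ≈ 0.17, so MSV < AVE holds
```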

If a test has convergent validity issues, the variables do not correlate well with one another within their parent factor; this means that the latent factor is not well explained by its observed variables. If, on the other hand, the test faces discriminant validity issues, the variables correlate more highly with variables outside the parent factor than with those inside its domain, suggesting that the latent factor is better explained by variables belonging to a different factor than by its own observed variables (Hair, Ringle, & Sarstedt, 2012).

The following section describes the details of the CFA conducted (method and results) on the four-factor structure generated in the EFA of Phase 3.