
According to Saunders et al. (2007), internal validity concerning questionnaires is defined as "the ability of your questionnaire to measure what you intend it to measure" (p. 366).

Based on Bolarinwa (2015), validity can be classified into theoretical and empirical constructs. The theoretical constructs include content validity and face validity. Content validity refers to the extent to which the questionnaire items provide adequate coverage of the investigated construct (Saunders et al., 2007). Content validity can be achieved in several ways. For example, a panel of experts on the subject can review the questions (Bolarinwa, 2015). Another is to ground the items in the construct definitions derived from the literature review (Saunders et al., 2007).

Face validity is established when an expert reviews the questions and states that the question measures the trait of interest; however, some authors consider face validity similar to content validity (Bolarinwa, 2015).

Several threats to validity jeopardise the researchers' ability to draw correct conclusions from an experiment. According to Creswell (2014), there are four types of validity threats: internal threats, external threats, statistical conclusion validity threats, and construct validity threats.


Internal validity threats are "experimental procedures, treatments, or experiences of the participants that threaten the researcher's ability to draw correct inferences from the data about the population in an experiment" (Creswell, 2014, p. 223).

An external validity threat occurs when researchers draw an incorrect inference from the sample data to other individuals; the characteristics of the individuals chosen for the survey might limit the ability to generalise. Statistical conclusion validity threats arise from inadequate statistical power or violations of statistical assumptions. Construct validity threats arise from an inadequate construct definition and, consequently, inadequate items being used to investigate the construct (Creswell, 2014).

Accordingly, to achieve theoretical validity in this research, the questionnaire will be presented to a panel of experts, and any required amendments will be made.

Additionally, constructs and operational definitions mentioned earlier in this chapter will be compared to the questionnaire items to establish content validity. As for the questionnaire reliability, all the questionnaire scales were adopted from the literature, and previous reliability results were reviewed as indicated in section 6 and presented in Appendix (4).

Moreover, a pilot test will be conducted on 30 respondents to evaluate the response time, collect notes and perform an internal reliability test.

3.4 Statistical Analysis

Statistical analysis for this study was conducted using SPSS and AMOS. The following steps were taken to ensure that the final results of the analysis were reliable.

3.4.1 Handling Missing Data

Missing data can result from data-entry errors or from respondents omitting answers.

Missing data can be handled in several ways. For example, imputation methods estimate replacement values from the available observations.


Missing data can produce several problems, such as biasing the correlations between variables. When handling missing data, special care was given to observing any patterns or relationships among the missing values, so as to stay close to the original distribution when any missing-data remedies are applied (Hair et al., 2010). Alternatively, missing-data issues can be eliminated by using an online data-gathering method that makes every question compulsory and importing the resulting data into the statistical analysis package.
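To illustrate one of the imputation methods mentioned above, the sketch below performs simple mean imputation, replacing each missing value with the mean of the available observations in that column. This is a minimal numpy illustration with hypothetical data, not the procedure used in this study (which avoided missing data through compulsory online questions):

```python
import numpy as np

def mean_impute(data):
    """Replace missing values (NaN) in each column with that column's mean,
    computed from the available observations only."""
    data = np.array(data, dtype=float)            # copy, so the input is not mutated
    col_means = np.nanmean(data, axis=0)          # column means, ignoring NaNs
    missing = np.isnan(data)
    data[missing] = np.take(col_means, np.where(missing)[1])
    return data

# Two respondents omitted an answer; each gap is filled with its column mean.
responses = [[4.0, 3.0], [np.nan, 5.0], [2.0, np.nan]]
print(mean_impute(responses))   # column means are 3.0 and 4.0
```

Mean imputation preserves the variable means but shrinks the variance, which is why inspecting the missingness pattern first, as described above, matters.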

3.4.2 Outliers

Outliers are observations with unique characteristics that make them distinct from other observations. They cannot be categorised as beneficial or problematic unless viewed within their context (Hair et al., 2010). There are three ways of detecting outliers: univariate, bivariate, and multivariate. This research is multivariate, with more than two variables involved; accordingly, multivariate outlier detection techniques are the most appropriate. Mahalanobis D² is one such technique: the value D²/df is calculated, and a threshold of three is used to designate possible outliers (Hair et al., 2010). After outliers are detected, they should be examined, and a decision made to retain or omit them. According to Hair et al. (2010), outliers should be retained unless found to be aberrant or not representative of observations in the population.
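The D²/df screening described above can be sketched as follows. This is a minimal numpy illustration with hypothetical data, not SPSS/AMOS output; the threshold of three follows Hair et al. (2010):

```python
import numpy as np

def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each observation from the sample centroid."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

def flag_outliers(X, threshold=3.0):
    """Flag observations whose D²/df exceeds the threshold."""
    X = np.asarray(X, dtype=float)
    df = X.shape[1]                 # degrees of freedom = number of variables
    return mahalanobis_d2(X) / df > threshold

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3.0, 0.5, size=(29, 2)),   # 29 typical respondents
               [[10.0, 10.0]]])                      # one aberrant observation
print(flag_outliers(X)[-1])                          # aberrant case flagged: True
```

Flagged cases are then inspected individually before deciding whether to retain or omit them, as stated above.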

3.4.3 Validity and Reliability (Post Data Collection)

Empirical validity includes construct and criterion-related validity (Bolarinwa, 2015). This study focuses on construct validity, mainly on three types: convergent, discriminant, and factorial validity.

Convergent validity is defined as the degree to which two separately measured, theoretically related variables relate to each other, while discriminant validity provides proof that one concept is distinct from another related concept. Finally, factorial validity is considered an empirical extension of content validity and is usually applied when a construct has multiple dimensions (Bolarinwa, 2015).

This study uses confirmatory factor analysis (CFA) to confirm the factorial validity and to calculate convergent and discriminant validity. According to Harrington (2009), CFA is used to confirm exploratory factor analysis findings in new samples. Since this research adopts scales that have been used previously in the literature, CFA is needed to confirm the previous results in a new sample. For an adequate CFA model, any item with a factor loading below 0.50 will be eliminated (Harrington, 2009). The values of several indices will be reviewed to assess model fit: the absolute fit index chi-square (χ²), parsimony correction indices such as the root mean square error of approximation (RMSEA), and comparative fit indices, including the comparative fit index (CFI) and the Tucker-Lewis index (TLI).
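Convergent validity from CFA output is commonly summarised with composite reliability (CR) and average variance extracted (AVE), computed from the standardised factor loadings. The sketch below assumes the conventional formulas, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n, with hypothetical loading values; it is an illustration, not this study's results:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)² / ((sum of loadings)² + sum of error variances),
    where each error variance is 1 - λ² for standardised loadings."""
    lam = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return lam ** 2 / (lam ** 2 + err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.72, 0.81, 0.68, 0.75]   # hypothetical standardised CFA loadings
print(round(composite_reliability(loadings), 2))        # 0.83
print(round(average_variance_extracted(loadings), 2))   # 0.55
```

By the usual rules of thumb, CR above 0.70 and AVE above 0.50 support convergent validity, and the square root of AVE exceeding the inter-construct correlations supports discriminant validity.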

3.4.4 Descriptive Analysis

Descriptive analysis is a group of techniques that describe the data. The first measures used in this study are measures of central tendency, which provide information about the middle of a group of numbers; central tendency techniques include the mean, median and mode (Black, 2019). In this study, mean scores for all the research variables were calculated. The second technique is the measure of variability, which describes the dispersion of a data set (Black, 2019); the range, the standard deviation and the variance are examples of variability measures. The last technique used is the measures of shape, which describe the shape of the data set; these include skewness and kurtosis (Black, 2019). According to Garson (2012), skewness and kurtosis are used to assess normality: each value is divided by its standard error, and a common cut-off point is ±2.
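The normality screen described by Garson (2012) can be sketched as follows. The standard errors here use the common large-sample approximations √(6/n) and √(24/n); SPSS uses slightly different small-sample formulas, so this is an illustration, not a reproduction of SPSS output:

```python
import math

def skewness(x):
    """Population skewness: third central moment over the cubed standard deviation."""
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return sum((v - m) ** 3 for v in x) / n / s ** 3

def excess_kurtosis(x):
    """Fourth central moment over squared variance, minus 3 (0 for a normal curve)."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / n / s2 ** 2 - 3

def normality_flags(x):
    """True for each statistic whose value / standard error lies within ±2."""
    n = len(x)
    se_skew = math.sqrt(6.0 / n)     # approximate large-sample standard errors
    se_kurt = math.sqrt(24.0 / n)
    return (abs(skewness(x) / se_skew) <= 2,
            abs(excess_kurtosis(x) / se_kurt) <= 2)

print(normality_flags([1, 2, 3, 4, 5]))   # symmetric data passes both checks
```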

3.4.5 Bivariate and Multivariate Analysis

3.4.5.1 Correlations

Correlations are "a measure of the degree of relatedness of variables." The statistic r is the Pearson product-moment correlation coefficient; it measures the linear relationship between two variables, stipulating its direction and strength (Black, 2019).
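As a minimal illustration of the Pearson product-moment coefficient, computed from its definition (covariance over the product of the standard deviations):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))   # perfectly linear: 1.0
```

The sign of r gives the direction of the relationship and its magnitude (from 0 to 1) gives the strength.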

3.4.5.2 Regression

Multiple regression is considered an appropriate analysis method when there is a single dependent variable (Hair et al., 2010). Before performing multiple regression, a few assumptions must be checked, including the absence of outliers, normality and multicollinearity.

Multivariate normality was checked using the Q-Q plot and the residuals scatter plot. Multicollinearity was checked using the variance inflation factor (VIF), with a cut-off of VIF < 4 and tolerance > 0.1 (Garson, 2012).
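The VIF check can be sketched without a statistics package: each predictor is regressed on the remaining predictors, and VIF_j = 1 / (1 − R²_j), with tolerance being its reciprocal. This is a numpy illustration with hypothetical data, not the study's SPSS output:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each predictor column of X.
    VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing column j
    on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.0, 1.0, 4.0, 3.0, 5.0]
c = [x1 + x2 + e for x1, x2, e in zip(a, b, [0.1, -0.1, 0.1, -0.1, 0.1])]
print(vif(np.column_stack([a, b, c])))   # c is nearly a + b, so its VIF is large
```

A VIF above 4 (equivalently, tolerance below 0.25) would flag the offending predictor under the cut-off used here.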

3.4.5.3 Mediation Test

In order to test the mediating role of employee engagement, the steps designated in Hayes (2009) were followed. Hayes goes beyond Baron and Kenny's "causal steps approach" to mediation testing by testing the significance of the indirect effect. The intervening variable model (Figure 12) guides the mediation-effect calculations. The causal steps approach requires calculating each path effect and then determining whether a variable acts as a mediator by examining whether specific statistical criteria are met. A more recent analytical approach is to quantify the indirect effect and test whether it is significant. One technique is bootstrapping, which generates an empirical representation of the sampling distribution of the indirect effect: the sample is repeatedly resampled with replacement, and the indirect effect is recomputed each time. As Hayes (2009) recommends, this will be repeated 5,000 times. Additionally, Hayes created the PROCESS macro, which can be added to SPSS to examine the mediation effect.
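The percentile-bootstrap logic can be sketched as follows. This is a simplified numpy illustration of a simple mediation model (x → m → y) with synthetic data and hypothetical effect sizes, not the PROCESS macro itself:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b from two OLS fits: m = i1 + a*x, then y = i2 + b*m + c'*x."""
    a = np.polyfit(x, m, 1)[0]                        # slope of m on x
    A = np.column_stack([np.ones(len(x)), m, x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return a * coef[1]                                # coef[1] is b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect
    (Hayes (2009) recommends 5,000 resamples)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        effects.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return lo, hi                                     # mediation supported if CI excludes 0

rng = np.random.default_rng(1)
x = rng.normal(size=200)                        # independent variable
m = 0.5 * x + rng.normal(scale=0.5, size=200)   # mediator (e.g. employee engagement)
y = 0.5 * m + rng.normal(scale=0.5, size=200)   # dependent variable
lo, hi = bootstrap_ci(x, m, y, n_boot=1000)
print(lo > 0)                                   # CI excludes zero, so mediation: True
```

The confidence interval is read directly from the percentiles of the resampled indirect effects, which is what makes the test free of the normality assumption underlying the Sobel test.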
