5.4 Quantitative Data Analysis and Results
5.4.3 Analysis of Determinants of Citizens’ Adoption of m-Government Services in Tanzania
5.4.3.1 Sample Appropriateness, Reliability and Validity
This section presents validity and reliability results for the quantitative data collected in response to part two of the questionnaire (question Q11), which gathered citizens' views on the various influences on adoption. The appropriateness of the sample for analysis was determined using the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity. Furthermore, various reliability and validity tests were carried out to examine the quality of the measurement model, that is, the conceptualized model. For reliability, both construct and scale reliability were examined; for model validity, both convergent and discriminant (divergent) validity were assessed. Model validity and reliability were examined using Cronbach's Alpha values, composite reliability (CR) values, average variance extracted (AVE) values, maximum shared variance (MSV) and average shared variance (ASV).
Sample Appropriateness: Table 5.10 presents the results of the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity for the quantitative data set. For the data set consisting of citizens' opinions regarding the factors influencing their decision to adopt m-government services, the KMO test and Bartlett's test of sphericity indicated an adequate sample for further analysis. According to Hair et al. (2010), a sample is appropriate for factor analysis when the KMO test value is greater than 0.5 with a significant Bartlett's test of sphericity (p < 0.05). The KMO test returned 0.908 with a significant Bartlett's test of sphericity value of 0.000, implying that the data are suitable for factor analysis.
Table 5.10: KMO and Bartlett's test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .908
Bartlett's Test of Sphericity: Approx. Chi-Square = 12322.212; df = 703; Sig. = .000
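For transparency, the two statistics in Table 5.10 can be recomputed directly from the raw item responses. The sketch below is a minimal illustration only, not the script used in this study: it assumes the Q11 responses sit in a CSV file (hypothetical name q11_item_responses.csv) with one column per indicator and one row per respondent, and it uses the open-source factor_analyzer package.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Assumed layout: Likert-scale responses to question Q11, one column per
# indicator (e.g. PE1..BI3) and one row per respondent (hypothetical file).
items = pd.read_csv("q11_item_responses.csv")

# Bartlett's test of sphericity: tests whether the correlation matrix
# differs significantly from an identity matrix (p < 0.05 required).
chi_square, p_value = calculate_bartlett_sphericity(items)

# KMO measure of sampling adequacy: values above 0.5 indicate the sample
# is appropriate for factor analysis (Hair et al., 2010).
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett's chi-square = {chi_square:.3f}, p = {p_value:.3f}")
print(f"Overall KMO = {kmo_overall:.3f}")
```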
Sample Reliability: Table 5.11 summarizes the results for internal reliability in terms of Cronbach's Alpha values. Cronbach's Alpha has been widely applied to assess reliability in quantitative research (Komba, 2016; Venkatesh, Thong & Xu, 2012; Oliveira et al., 2016). Hair et al. (2016) note that Cronbach's Alpha assesses reliability by summarizing the extent to which the items in a given set are interrelated. The strength of this interrelationship is expressed as a coefficient ranging between 0 and 1; Hair et al. (2006) recommend values above 0.7 as indicating satisfactory reliability. Moreover, Hinton (2004) classified reliability into four clusters: values above 0.90 signify excellent reliability, values between 0.90 and 0.71 high reliability, values between 0.70 and 0.50 moderately high reliability, and values below 0.50 low reliability.
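For reference, Cronbach's Alpha for a scale of $k$ items is conventionally computed from the item and total-score variances as

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; the coefficient approaches 1 as the items become more strongly interrelated.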
Therefore, applying the benchmark criteria for acceptable reliability used by Hinton (2004), Hair et al. (2010) and Taherdoost (2016), the model generally demonstrated a good degree of reliability. Cronbach's Alpha values for all assessed variables, namely performance expectancy (PE), hedonic values (HV), self-efficacy (SE), attitudinal influences (AI), subjective norms (SN), technological influences (TI), financial influences (FI), facilitating conditions (FC), and behaviour intention (BI), were higher than 0.8, above the benchmark value of 0.7 (Table 5.11). A similar application of this cut-off criterion can be found in Venkatesh, Thong & Xu (2012), Oliveira et al. (2016), and Tarhini, Hone & Liu (2014). The Cronbach's Alpha values ranged between 0.930 (technological influences), the highest, and 0.854 (performance expectancy), the lowest, which are classified as excellent and high reliability, respectively. Overall, the model indicated high reliability for assessing citizens' adoption of m-government services.
Table 5.11: Cronbach's Alpha values for construct reliability

Construct    Cronbach's Alpha Value    Reliability Classification
Performance Expectancy (PE) 0.854 High
Hedonic Values (HV) 0.920 Excellent
Self-Efficacy (SE) 0.887 High
Attitudinal Influences (AI) 0.860 High
Subjective Norms (SN) 0.911 Excellent
Technological Influences (TI) 0.930 Excellent
Facilitating Conditions (FC) 0.919 Excellent
Financial Influences (FI) 0.895 High
Behaviour Intention to Use (BI) 0.923 Excellent
Sample Validity: Table 5.12 presents the results of the assessment of the quality of the research measurement model. In line with Hair et al.'s (2016) argument on the appropriateness of composite reliability, compared with Cronbach's Alpha, for establishing internal reliability, the measurement model was further analysed using factor analysis. Scale reliability was thus evaluated based on composite reliability (CR) values and item loadings on the model. Hair et al. (2010) demonstrated that CR values greater than 0.7 indicate an acceptable degree of reliability for conducting factor analysis. Applying the same CR cut-off value as Ooi & Tan (2016), scale measures were considered reliable only where CR values exceeded 0.7. Moreover, Oliveira et al. (2016) argued that for an item to be included in a scale, its loading should be equal to or greater than 0.7. It is important to note that facilitating conditions (FC) is observed by two sets of indicators, namely trust and security (TS) and behaviour control (BC) indicators, as explained in section 3.6.1. According to Susanto & Goodwin (2011), citizens are motivated to adopt only if systems and structures are available to support the use, security and safety of any transactional data.
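For clarity, the CR and AVE statistics reported in Table 5.12 are conventionally computed from the standardized item loadings $\lambda_i$ of a construct with $n$ indicators as

\[
\mathrm{CR} \;=\; \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{n}\lambda_i\right)^{2} + \sum_{i=1}^{n}\left(1-\lambda_i^{2}\right)},
\qquad
\mathrm{AVE} \;=\; \frac{\sum_{i=1}^{n}\lambda_i^{2}}{n}.
\]

Applied to the three PE loadings in Table 5.12 (0.826, 0.848 and 0.728), for example, these formulas reproduce the reported AVE of 0.644 and CR of 0.844.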
According to Table 5.12, the measurement scales used to measure the constructs presented a good degree of reliability, with all loadings significant at p < 0.001. The CR values, which ranged between 0.844 and 0.930, were all higher than 0.7. Likewise, the loadings of the items used to measure the variables were significant (at p < 0.001), with values ranging between 0.654 and 0.939, largely satisfying Oliveira et al.'s (2016) condition of being equal to or greater than 0.7; only a few items (BC1, BC3 and FI5) fell marginally below this threshold. Generally, the scales used to measure the constructs demonstrate a reasonable degree of reliability; thus, the measurement model has a reasonable degree of reliability.
Validity, on the other hand, was assessed in terms of convergent validity and discriminant validity. Convergent validity was assessed by comparing average variance extracted (AVE) values against CR values, while discriminant validity was examined by comparing maximum shared variance (MSV) and average shared variance (ASV) values against AVE values. Where CR values are higher than AVE values and AVE values exceed 0.5, convergent validity is confirmed. Similarly, where MSV and ASV values are less than AVE values, discriminant validity is established (Hair et al., 2010; Ooi & Tan, 2016; Tarhini, Hone & Liu, 2014). The results in Table 5.12, interpreted following Ooi & Tan (2016), confirm that the measures used to observe the constructs are indeed measuring those constructs. CR values for all the variables were found to be higher than their AVE values, and the AVE values for all the constructs were higher than 0.5; that is, each construct explains more than 50% of the variance in its scale items, indicating a good measurement model. Additionally, discriminant validity was established since the MSV and ASV values for all the constructs were less than the related AVE values. Therefore, according to the criteria of Hair et al. (2010) and Tarhini, Hone & Liu (2014), the measurement model demonstrates satisfactory discriminant validity.
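To make these criteria concrete, the following minimal sketch recomputes AVE, CR, MSV and ASV for a single construct, using the PE loadings reported in Table 5.12; the inter-construct correlations in it are hypothetical placeholders, since the full correlation matrix is not reproduced in this section.

```python
import numpy as np

# Standardized loadings of the PE items (PE3, PE2, PE1) from Table 5.12.
loadings = np.array([0.826, 0.848, 0.728])

# Hypothetical correlations between PE and the other constructs
# (placeholders only; the study's inter-construct correlation matrix
# is not reported in this section).
corr_with_others = np.array([0.56, 0.25, 0.33, 0.18, 0.41, 0.22, 0.30, 0.27])

ave = np.mean(loadings ** 2)                                    # average variance extracted
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

shared = corr_with_others ** 2                                  # squared correlations
msv, asv = shared.max(), shared.mean()                          # maximum / average shared variance

print(f"AVE = {ave:.3f}, CR = {cr:.3f}, MSV = {msv:.3f}, ASV = {asv:.3f}")
print("Convergent validity:", cr > 0.7 and ave > 0.5 and cr > ave)
print("Discriminant validity:", msv < ave and asv < ave)
```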
Table 5.12: Measurement model quality assessment criterion (n = 396)

Construct | Item | Loading | t-value | AVE | CR | MSV | ASV
Performance Expectancy (PE) | PE3 | 0.826 | 35.420*** | 0.644 | 0.844 | 0.317 | 0.147
 | PE2 | 0.848 | 38.936*** | | | |
 | PE1 | 0.728 | 33.015*** | | | |
Hedonic Values (HV) | HV3 | 0.874 | 3.561*** | 0.795 | 0.921 | 0.358 | 0.071
 | HV2 | 0.929 | 4.842*** | | | |
 | HV1 | 0.871 | 7.530*** | | | |
Self-Efficacy (SE) | SE3 | 0.749 | 8.150*** | 0.740 | 0.894 | 0.358 | 0.063
 | SE2 | 0.939 | 3.743*** | | | |
 | SE1 | 0.882 | 5.530*** | | | |
Attitudinal Influences (AI) | AI3 | 0.734 | 14.742*** | 0.686 | 0.866 | 0.166 | 0.102
 | AI2 | 0.939 | 10.651*** | | | |
 | AI1 | 0.798 | 12.698*** | | | |
Subjective Norms (SN) | SN6 | 0.759 | 23.271*** | 0.617 | 0.906 | 0.513 | 0.255
 | SN5 | 0.784 | 23.697*** | | | |
 | SN4 | 0.771 | 23.220*** | | | |
 | SN3 | 0.809 | 23.310*** | | | |
 | SN2 | 0.843 | 24.285*** | | | |
 | SN1 | 0.743 | 25.490*** | | | |
Technological Influences (TI) | TI6 | 0.722 | 34.285*** | 0.690 | 0.930 | 0.327 | 0.154
 | TI5 | 0.786 | 33.321*** | | | |
 | TI4 | 0.861 | 32.558*** | | | |
 | TI3 | 0.834 | 33.287*** | | | |
 | TI2 | 0.926 | 33.533*** | | | |
 | TI1 | 0.842 | 31.883*** | | | |
Facilitating Conditions (FC) | TS4 | 0.835 | 14.522*** | 0.575 | 0.923 | 0.533 | 0.242
 | TS3 | 0.846 | 13.996*** | | | |
 | TS2 | 0.869 | 16.762*** | | | |
 | TS1 | 0.759 | 19.634*** | | | |
 | BC5 | 0.701 | 10.834*** | | | |
 | BC4 | 0.707 | 12.839*** | | | |
 | BC3 | 0.684 | 21.583*** | | | |
 | BC2 | 0.735 | 22.025*** | | | |
 | BC1 | 0.654 | 23.644*** | | | |
Financial Influences (FI) | FI5 | 0.657 | 8.954*** | 0.653 | 0.903 | 0.533 | 0.258
 | FI4 | 0.726 | 6.991*** | | | |
 | FI3 | 0.867 | 16.638*** | | | |
 | FI2 | 0.924 | 19.539*** | | | |
 | FI1 | 0.837 | 21.487*** | | | |
Behaviour Intention to Use (BI) | BI1 | 0.860 | 33.611*** | 0.803 | 0.924 | 0.449 | 0.228
 | BI2 | 0.891 | 35.021*** | | | |
 | BI3 | 0.936 | 35.374*** | | | |

Note: ***Significant at p < 0.001