CHAPTER III RESEARCH METHODOLOGY
3.4 Phase 2
3.4.7 Data Analysis
This study explores the relationships among the factors in small-scale sports event brand building and then tests them with a path model. The proposed model includes the direct relationships between event positioning, event awareness (recall and recognition), event image (competitive image, service image, facility image, safety image, and organizational image), participant feelings, and participant judgment; event image, participant feelings, and participant judgment, in turn, directly influence event resonance. Furthermore, the mediating effects of event image, participant feelings, and participant judgment between event awareness and event resonance are examined to verify the applicability of Keller’s brand-building theory to the brand building of small-scale sports events. The following sections provide the detailed confirmatory factor analysis results and the standardized path analysis results.
Normality Test
Before model testing, the normality of the data can be judged by checking its skewness and kurtosis. When the skewness and kurtosis values fall within ±1.96, the data can be regarded as normally distributed at the 0.05 level; values within ±2.58 indicate normality at the 0.01 level (J. F. J. Hair, Anderson, Tatham, & Black, 1998).
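For illustration only, the screening rule above can be expressed as a short script. This is a minimal sketch, assuming the item responses are available as a pandas DataFrame named responses (a hypothetical name); it is not part of the analyses actually run in this study, which used SPSS 23.

    # Minimal normality screen based on skewness and kurtosis (illustrative only).
    import pandas as pd
    from scipy.stats import skew, kurtosis

    def normality_screen(responses: pd.DataFrame, cutoff: float = 1.96) -> pd.DataFrame:
        """Flag items whose skewness or excess kurtosis falls outside +/- cutoff."""
        rows = []
        for col in responses.columns:
            x = responses[col].dropna()
            s = skew(x)
            k = kurtosis(x)  # Fisher definition: a normal distribution gives 0
            rows.append({"item": col, "skewness": s, "kurtosis": k,
                         "within_cutoff": abs(s) <= cutoff and abs(k) <= cutoff})
        return pd.DataFrame(rows)

    # Use cutoff=1.96 for the 0.05 level and cutoff=2.58 for the 0.01 level.
    # print(normality_screen(responses))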
Structural Equation Modeling (SEM)
This study used structural equation modeling (SEM) to analyze the proposed small-scale sports event brand-building model. Structural equation modeling is usually carried out with a two-step approach: the first step tests the measurement model and the second step tests the structural model, with the measurement model preceding the structural model (James, Mulaik, & Brett, 1992; Anderson & Gerbing, 1988). The first step used confirmatory factor analysis (CFA) to evaluate the proposed measurement model. The purpose of the CFA was to apply Keller’s brand-building theory to the field of sports events and to examine whether the factors and scales are consistent with previous research (Keller, 2001; Berry, 2000; Duan Yanling & Liu Bing, 2017; Jöreskog & Sörbom, 1996). The CFA also provides the goodness-of-fit, discriminant validity, and convergent validity of the model, which supplies the score evaluation and the researcher-designed structure for the subsequent structural model. To further verify the influential relationships between the factors, the study carried out the second-step model test, including a path test and a mediation effect analysis. The detailed measurement and structural model results are reported in the following sections. For the complete model evaluation, this study analyzed descriptive statistics with SPSS 23 and ran the SEM analyses with Amos 23. The measurement model and the structural model were tested against the same fit index criteria.
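For readers who wish to reproduce the two-step logic in code, the sketch below shows the general pattern. It is not the analysis conducted in this study (which used Amos 23); it assumes the open-source semopy package, a hypothetical data file survey.csv, and simplified, hypothetical factor and item names.

    # Illustrative two-step SEM sketch (not the Amos 23 analysis used in this study).
    import pandas as pd
    import semopy

    data = pd.read_csv("survey.csv")   # hypothetical data file

    # Step 1: measurement model (CFA) - latent factors defined by their items only.
    cfa_desc = """
    EventImage     =~ ei1 + ei2 + ei3
    EventResonance =~ er1 + er2 + er3
    """
    cfa = semopy.Model(cfa_desc)
    cfa.fit(data)
    print(semopy.calc_stats(cfa))      # chi-square, CFI, TLI, RMSEA, GFI, AGFI, ...

    # Step 2: structural model - adds the hypothesized paths among latent factors.
    sem_desc = cfa_desc + "\nEventResonance ~ EventImage\n"
    sem = semopy.Model(sem_desc)
    sem.fit(data)
    print(sem.inspect())               # parameter estimates and standard errors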
The absolute fit index. The absolute fit index determines how well the a priori model fits the sample data (McDonald & Ho, 2002). These indicators provide the most basic indication of how well the proposed theory fits the data (Hooper, Coughlan, & Mullen, 2008). The chi-square value is the traditional measure used to assess overall model fit (Hu & Bentler, 1999), but it is sensitive to sample size: with large samples, the chi-square test almost always rejects the model (Bentler & Bonett, 1980; Jöreskog & Sörbom, 1993), while with small samples it may fail to distinguish between well-fitting and poorly fitting models (Kenny & McCoach, 2003). Because of these limitations, researchers use the relative/normed chi-square (χ2/df) as a surrogate measure of model fit that minimizes the effect of sample size (Wheaton, Muthén, Alwin, & Summers, 1977). The recommended value for the normed chi-square is less than 3 (R. B. Kline, 2004). The RMSEA is sensitive to the number of estimated parameters in the model; it indicates how well the model, with optimally chosen parameter estimates, would fit the population covariance matrix (Byrne, 1998), and it is considered one of the most informative fit indices (Diamantopoulos & Siguaw, 2000). When the RMSEA is less than 0.07, the model is considered a good fit (Steiger, 2007). The goodness-of-fit statistic (GFI) shows how closely the model replicates the observed covariance matrix (Diamantopoulos & Siguaw, 2000). The statistic ranges from 0 to 1, and a GFI above 0.9 is traditionally regarded as indicating a good fit (J. Hair et al., 2006); the closer to 1, the better the fit. However, given the index’s sensitivity to sample size, it has been suggested that it should not be relied on alone (Sharma, Mukherjee, Kumar, & Dillon, 2005). The AGFI is related to the GFI and corrects it for degrees of freedom; its value also lies between 0 and 1. Generally, a value of at least 0.9 indicates that the model fits well, and at least 0.8 has been suggested as indicating an acceptable fit (Marsh, Hau, Balla, & Grayson, 1998). Considering the influence of sample size on these two indices, they are usually judged together with other indices rather than treated as independent criteria (Hooper et al., 2008). The root mean square residual (RMR) is the square root of the discrepancy between the residuals of the sample covariance matrix and the hypothesized covariance model; the RMR of a well-fitting model is less than 0.05 (McDonald & Ho, 2002).
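As a reference, two of the absolute indices above can be computed directly from the model chi-square, as in the sketch below. The numbers shown are placeholders for illustration, not results from this study.

    # Illustrative computation of two absolute fit indices (placeholder inputs only).
    import math

    def normed_chi_square(chi2: float, df: int) -> float:
        """Relative/normed chi-square (chi2/df); values below 3 suggest good fit."""
        return chi2 / df

    def rmsea(chi2: float, df: int, n: int) -> float:
        """Root mean square error of approximation; values below 0.07 suggest good fit."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    chi2, df, n = 845.2, 367, 414          # placeholder values, not study results
    print(normed_chi_square(chi2, df))      # compare against the cutoff of 3
    print(rmsea(chi2, df, n))               # compare against the cutoff of 0.07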
The incremental fit index. The incremental fit index is also known as the relative fit index (Miles & Shevlin, 2007). One statistic in this group is the Normed Fit Index (NFI), for which the recommended value is greater than 0.9 (Bentler et al., 1980). The Non-Normed Fit Index (NNFI, also known as the Tucker-Lewis Index) can exceed 1.0 because it is non-normed, which makes it difficult to interpret (Byrne, 1998). Experts have proposed cutoff values as low as 0.8, but more recently 0.95 has been suggested as the recommended threshold (Hu & Bentler, 1999). The Comparative Fit Index (CFI) takes sample size into account and is a revised form of the NFI (Tabachnick & Fidell, 2007). The CFI needs to be greater than 0.9 to ensure that misspecified models are not accepted, and a value greater than 0.95 is preferable (Hu & Bentler, 1999). These indices were used to evaluate both the measurement model and the subsequent structural model, and the second-step model test and path test further verified the influence relationships between the factors.
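The incremental indices compare the target model with a baseline (independence) model. The sketch below shows the standard formulas, again with placeholder inputs rather than results from this study.

    # Standard formulas for the incremental fit indices (placeholder inputs only).
    def nfi(chi2_m: float, chi2_b: float) -> float:
        """Normed Fit Index: recommended value > 0.90."""
        return (chi2_b - chi2_m) / chi2_b

    def tli(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
        """Non-Normed Fit Index / Tucker-Lewis Index; can exceed 1.0."""
        return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

    def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
        """Comparative Fit Index: recommended value > 0.90 (ideally > 0.95)."""
        d_m = max(chi2_m - df_m, 0.0)
        d_b = max(chi2_b - df_b, d_m)
        return 1.0 - d_m / d_b if d_b > 0 else 1.0

    # Placeholder target-model and baseline-model chi-squares:
    print(nfi(845.2, 9200.0), cfi(845.2, 367, 9200.0, 406))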
From the perspective of the model, a large number of indicators means that the model is complex, the absolute values of the measurement errors are large, and the model parameters have relatively large standard errors (Little, Cunningham, Shahar, & Widaman, 2002). The item-parceling (item-packaging) method is an appropriate way to resolve this tension (Bandalos, 2002). The overall fit of the initial model in this study was reasonable, but some indicators were still unsatisfactory. Considering the complexity of the model, the items were therefore combined into parcels. Item parceling is suitable when the model is complex and item communality is low (Marsh et al., 1998). Parceling is not appropriate when the focus of the analysis is the measurement model itself; however, when the research purpose is to analyze the relationships between variables and to understand the relationships between latent variables, the parceling method is very useful (Little et al., 2002; Yan & Zhonglin, 2011). When the items are unidimensional and homogeneous in the population, the structural parameter estimates obtained with parcels are essentially unchanged compared with those obtained from the original items (Sass & Smith, 2006; Yuan, Bentler, & Kano, 1997). Because some item-level errors cancel each other out within a parcel, the measurement error of the new indicator is smaller and the proportion of common-factor variance in the observed score increases, which raises communality (Matsunaga, 2008).
After parceling, the fit of the model improved. Table 18 shows the detailed goodness-of-fit indices after parceling.
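A minimal sketch of the parceling step is shown below; the parcel and item names are hypothetical and are used only to illustrate how items belonging to one dimension are averaged into a single parcel indicator before the structural model is re-estimated.

    # Minimal item-parceling sketch with hypothetical item and parcel names.
    import pandas as pd

    # Each parcel averages the original items of one (sub)dimension into one indicator.
    parcels = {
        "image_competitive": ["ci1", "ci2", "ci3"],
        "image_service":     ["si1", "si2", "si3"],
        "image_facility":    ["fi1", "fi2", "fi3"],
    }

    def build_parcels(responses: pd.DataFrame, parcels: dict) -> pd.DataFrame:
        """Return a DataFrame in which each column is the mean of one parcel's items."""
        return pd.DataFrame({name: responses[items].mean(axis=1)
                             for name, items in parcels.items()})

    # parcel_scores = build_parcels(responses, parcels)  # used as indicators in the SEM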
Reliability and Validity
The reliability of the model was tested with the factor loadings, the composite reliability (CR) values, and Cronbach's alpha coefficients (Fornell & Larcker, 1981; Nunnally & Bernstein, 1994). Factor loadings greater than 0.6, and CR and Cronbach's alpha values greater than 0.7, were considered acceptable (Fornell & Larcker, 1981; R. Kline, 2011).
Construct validity was judged by convergent validity and discriminant validity. Convergent validity was tested with the CR and average variance extracted (AVE) values; the AVE of each latent variable should exceed 0.5 (J. F. J. Hair et al., 1998). Discriminant validity was assessed with the correlations between constructs: the absolute value of each correlation coefficient should be less than 0.5 (J. F. J. Hair et al., 1998) and less than the square root of the corresponding AVE (Baojuan & Zhonglin, 2011; Wanli, 2008).
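These reliability and validity statistics can be computed directly from the item scores and the standardized factor loadings, as in the sketch below; the function names and the example loadings are illustrative, not values from this study.

    # Illustrative computation of Cronbach's alpha, composite reliability (CR), and AVE.
    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for one construct; values above 0.7 are acceptable."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    def composite_reliability(loadings: np.ndarray) -> float:
        """CR from standardized loadings; values above 0.7 are acceptable."""
        errors = 1 - loadings ** 2
        return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

    def average_variance_extracted(loadings: np.ndarray) -> float:
        """AVE from standardized loadings; values above 0.5 are acceptable."""
        return (loadings ** 2).mean()

    # Example with placeholder standardized loadings for one latent variable:
    lam = np.array([0.72, 0.68, 0.81, 0.75])
    print(composite_reliability(lam), average_variance_extracted(lam))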
Path Analysis
A critical ratio greater than 2 indicates that the corresponding standardized estimate differs significantly from zero at the 0.05 level (Arbuckle & Wothke, 1999).
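For reference, the critical ratio reported by Amos is the parameter estimate divided by its standard error; the sketch below shows this check with placeholder numbers, not values from this study.

    # Critical ratio check for a single path (placeholder estimate and standard error).
    from scipy.stats import norm

    estimate, std_error = 0.42, 0.11          # placeholder values for one path
    critical_ratio = estimate / std_error     # Amos reports this as C.R.
    p_value = 2 * (1 - norm.cdf(abs(critical_ratio)))
    print(critical_ratio > 1.96, p_value)     # significant at the 0.05 level if True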
Mediation Analysis
The bootstrap method was used to estimate the indirect effects in order to test the mediating role of event image between event awareness and participant feelings and participant judgment, as well as the mediating roles of participant feelings and participant judgment between event image and event resonance. Considering the influence of an independent variable X on a dependent variable Y, if X affects Y through a third variable M, M is called a mediating variable. If the independent variable affects the dependent variable through at least one mediating variable, a mediating effect exists (Preacher & Hayes, 2008).
In this study, the statistical significance of the indirect effects was assessed with bias-corrected bootstrap confidence intervals (Arbuckle & Wothke, 1999). The bootstrap is a resampling method: the original sample is treated as the bootstrap population, and repeated sampling with replacement from it produces bootstrap samples similar to the original sample (Wen, Marsh, & Hau, 2010). For each bootstrap sample, an estimate of the coefficient product is obtained; these estimates are denoted {âb̂}. The bias-corrected non-parametric percentile bootstrap method, which builds a bias-corrected confidence interval from these estimates, has relatively high testing power. If the confidence interval does not contain 0, the coefficient product (the indirect effect) is significant (Fritz & MacKinnon, 2007; Preacher & Hayes, 2008; Taylor, MacKinnon, & Tein, 2008; Zhonglin, Hongyun, & Jietai, 2012).
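The bias-corrected percentile bootstrap described above can be sketched as follows for a single mediator. The variable names x, m, and y are hypothetical one-dimensional arrays, and the study itself obtained these intervals through Amos 23 rather than through this code.

    # Bias-corrected percentile bootstrap for one indirect effect a*b (illustrative only;
    # the study obtained these intervals in Amos 23). x, m, y are hypothetical 1-D arrays.
    import numpy as np
    from scipy.stats import norm

    def ab_product(x, m, y):
        """a: effect of x on m; b: effect of m on y controlling for x; returns a*b."""
        Xa = np.column_stack([np.ones_like(x), x])
        a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
        Xb = np.column_stack([np.ones_like(x), m, x])
        b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
        return a * b

    def bc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        est = ab_product(x, m, y)
        n = len(x)
        boots = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)               # resample cases with replacement
            boots[i] = ab_product(x[idx], m[idx], y[idx])
        z0 = norm.ppf((boots < est).mean())           # bias-correction constant
        z = norm.ppf(1 - alpha / 2)
        lo = norm.cdf(2 * z0 - z) * 100               # adjusted lower percentile
        hi = norm.cdf(2 * z0 + z) * 100               # adjusted upper percentile
        return est, np.percentile(boots, [lo, hi])

    # Example: est, (lower, upper) = bc_bootstrap_ci(x, m, y)
    # The indirect effect is significant if the interval [lower, upper] excludes 0.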