
Moderated, or conditional, indirect effects are an increasingly popular type of complex mediation model in the social sciences. However, measures of explained variance for moderated effects in general have received little attention in the methodological literature. This section will review the literature for these effects in ANOVA and MLR, noting gaps in terms of the themes of standardized effect sizes previously described (e.g., generalizability, bias of estimators).

The goal of this section is to establish a coherent framework for explained variance for moderated effects in MLR that will serve as a foundation for extending these measures to moderated mediation models. An empirical demonstration of the conditional effect size will be provided using the running empirical example, and R software code will be provided.

It is common in the social sciences to hypothesize that the effect of one variable on another varies across identifiable populations. For example, it is possible that the effect of an early childhood intervention designed to improve reading is different for boys than it is for girls, or for children from lower SES neighborhoods than from higher SES neighborhoods. A more complete understanding of how effects vary in direction and magnitude can have important consequences for the reporting of study results.

In traditional MLR, the partial effect of a variable on an outcome is assumed to be constant across levels of all other variables in the regression model, which precludes investigating conditional effects. However, moderation hypotheses can be investigated in MLR by incorporating additional variables that are products of other variables in the model; the effects of such product terms are often referred to as interactions. The unstandardized effect of a variable x1 on y conditional on levels of a moderating variable x2 is expressed as


y = B0 + B1x1 + B2x2 + B3x1x2 + ε,   (5.1)

where B3 is the partial effect of the interaction of x1 and x2 on y controlling for x1 and x2. Although the effect is nonlinear (the expected value of y changes at different rates across values of the predictors), the assumptions required for additive regression models (i.e., normality of errors, linearity, homoscedasticity, existence, independence of errors) are assumed to apply across all combinations of predictor values. The significance of the interaction is determined by testing the significance of B3, or by testing the significance of the increment in R2 due to including the interaction term in the model (Cohen et al., 2003).
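The R2-increment test described above can be sketched numerically. The following Python/NumPy fragment (coefficient values and simulated data are hypothetical, not taken from the running empirical example) fits the model with and without the product term and computes the increment in R2 attributable to the interaction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Hypothetical population model: y = 1 + 0.4*x1 + 0.3*x2 + 0.2*x1*x2 + error
y = 1 + 0.4*x1 + 0.3*x2 + 0.2*x1*x2 + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit obtained via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones(n)
X_add = np.column_stack([ones, x1, x2])          # additive model
X_int = np.column_stack([ones, x1, x2, x1*x2])   # adds the product term
r2_add, r2_int = r_squared(X_add, y), r_squared(X_int, y)
print(r2_int - r2_add)  # increment in R^2 due to the interaction
```

In practice the increment would be tested with an F statistic; the fragment only illustrates how the product term enters the design matrix and contributes explained variance.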

If the effect of x1 on y is of particular interest, x1 would be considered the focal predictor, and x2 the moderator variable. Equation 5.1 can be rearranged to more closely resemble this distinction as

y = B0 + (B1 + B3x2)x1 + B2x2 + ε.   (5.2)

The effect of the focal variable x1 can now be said to vary across levels of x2. The interaction coefficient B3 is then the difference in the effect of x1 on y corresponding to a one-unit increase in x2. For example, if x2 is binary, B3 is the difference in the effect of x1 on y in one group relative to the effect in a reference group.

If the interaction coefficient is significantly different from zero, the moderation can be further examined by probing and plotting the effect of the focal predictor conditional on values of the moderator (Aiken & West, 1991). The effect of a predictor at a given level of a moderator (typically at the moderator mean and at ±1 SD) is referred to as a simple slope. The simple slope may also be tested for significance using a conditional standard error. Interpretation of the moderator effects can be facilitated by plotting these simple slopes at various levels of the moderator. An alternative to testing simple slopes at fixed values of the moderator is to construct simultaneous CIs for the effect of the predictor across the range of moderator values (Johnson & Neyman, 1936). Regions where the CIs do not include zero are values of the moderator where the simple slope is significant.
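The simple-slope computation can be sketched as follows (simulated data and coefficient values are hypothetical). The conditional standard error of the simple slope B1 + B3·m is obtained from the coefficient covariance matrix as sqrt(var(B1) + 2m·cov(B1, B3) + m²·var(B3)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.4*x1 + 0.3*x2 + 0.2*x1*x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = (resid @ resid) / (n - X.shape[1])
cov_b = sigma2 * np.linalg.inv(X.T @ X)   # coefficient covariance matrix

# Simple slope of x1 at moderator values mean - 1 SD, mean, mean + 1 SD
slopes, ses = [], []
for m in (x2.mean() - x2.std(), x2.mean(), x2.mean() + x2.std()):
    slope = beta[1] + beta[3] * m                          # B1 + B3*m
    se = np.sqrt(cov_b[1, 1] + 2*m*cov_b[1, 3] + m**2*cov_b[3, 3])
    slopes.append(slope)
    ses.append(se)
    print(f"moderator={m:+.2f}  slope={slope:.3f}  se={se:.3f}")
```

Each slope divided by its conditional standard error gives the t statistic for that simple slope; sweeping m over the full moderator range rather than three fixed points yields the Johnson–Neyman regions described above.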

5.1.1 Moderated MLR in LISCOMP

Although the moderated MLR models in Equations 5.1 and 5.2 are instructive, it is desirable to express a moderated MLR model in matrix form as in Section 3.1. Despite the long history of methodological research on moderated MLR, the appropriate matrix representation of an interaction in MLR has not been addressed. At issue is how to specify the interaction term not only for notational convenience, but also to make use of the results derived in Section 3.1 regarding the properties of estimators.

An obvious approach would be to model the product term as a new predictor in the matrix specification, as is commonly done when estimating interaction effects in MLR. This specification would yield the correct parameter estimates for the models in Equations 5.1 and 5.2, and including the product variable in the variance/covariance matrix allows for computations of variances, covariances, and R2. However, the specification also presents some issues.

Whereas it is typical to standardize coefficients by scaling them by the ratio of the standard deviation of the predictor to the standard deviation of the outcome, this is not an appropriate standardization for the product term (Champoux & Peters, 1987; Muthén & Asparouhov, 2015; Wen, Marsh, & Hau, 2010). This could be avoided by computing the product term from standardized variables, a straightforward solution that nonetheless requires extra data management. A more problematic issue is that this specification includes a nonlinear effect, which is not directly estimable in SEM software (Kenny & Judd, 1984).
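The standardization problem noted above can be illustrated numerically: standardizing the raw product x1x2 is not equivalent to taking the product of the standardized variables, except in special cases. The following sketch (distributional parameters are hypothetical) shows that the two quantities are far from perfectly correlated when the predictors are uncentered and correlated:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Correlated, uncentered predictors (hypothetical means/SDs)
x1 = rng.normal(loc=2.0, scale=1.5, size=n)
x2 = 0.5 * x1 + rng.normal(loc=1.0, scale=1.0, size=n)

# Route 1: form the product first, then standardize it
prod = x1 * x2
prod_then_std = (prod - prod.mean()) / prod.std()

# Route 2: standardize each variable first, then form the product
z1 = (x1 - x1.mean()) / x1.std()
z2 = (x2 - x2.mean()) / x2.std()
std_then_prod = z1 * z2

print(np.corrcoef(prod_then_std, std_then_prod)[0, 1])  # well below 1
```

Because the two routes diverge, scaling the product coefficient by sd(x1x2)/sd(y), as done for ordinary predictors, does not produce a properly standardized interaction coefficient.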

These issues can (in theory) be avoided by specifying the moderated MLR model in reduced form (Equation 5.2) as a SEM. The LISCOMP specification for latent interactions (Klein & Moosbrugger, 2000; Klein & Muthén, 2007), assuming variables are standardized, is expressed as

ηst = Bstηst + jηst′Ωstηst + ζ,   (5.3)

where j is a m1 vector designating the interaction outcome variable, Ωst is a square matrix of interaction coefficients

Ωst = [ 0   ω1,2   ⋯   ω1,p
        0    0     ⋯   ω2,p
        ⋮               ⋮
        0    0     ⋯    0  ] .   (5.4)

Solving for ηst and substituting into the measurement model yields the interaction model for manifest variables,

y = (I − Bst − jηst′Ωst)−1ζ.   (5.5)

This means that the outcome is conditional on values of ηst. This avoids the problem of standardization when centered variables are used to create the product term, because the interaction term will be scaled by a product of standard deviations rather than by the variance of the product (Champoux & Peters, 1987). However, if the product term is created from uncentered variables, the variance of the product is a complex function of variable means, variances, and covariances. In addition, it follows that if the elements of ηst are centered at 0 and uncorrelated, Equation 5.5 reduces to y = (I − Bst)−1ζ (Chapter 3). The equivalent MLR expression is

 .

= + +

y X ΩX ε (5.6)
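The quadratic form in this expression reproduces the interaction term of Equation 5.1 exactly. A minimal numerical check (coefficient and predictor values are hypothetical, and the intercept is omitted as for centered variables):

```python
import numpy as np

b1, b2, b3 = 0.4, 0.3, 0.2      # hypothetical coefficients
x = np.array([1.3, -0.8])       # one observation of (x1, x2)

beta = np.array([b1, b2])
Omega = np.array([[0.0, b3],
                  [0.0, 0.0]])  # b3 in the [1,2] element

y_quad = beta @ x + x @ Omega @ x          # x'beta + x'Omega x
y_direct = b1*x[0] + b2*x[1] + b3*x[0]*x[1]
print(y_quad, y_direct)  # equal up to floating point
```

The term x′Ωx collapses to b3·x1·x2 because Ω has a single nonzero element, so the quadratic form carries the interaction while x′β carries the additive effects.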

For an MLR model with two predictors and an interaction, the coefficient matrix Bst is

Bst = [ 0    0    0
        0    0    0
        β1   β2   0 ] ,   (5.7)

and Ωst is

Ωst = [ 0    β3   0
        0    0    0
        0    0    0 ] .   (5.8)

The placement of the 3 coefficient in Ωst designates which variable is the focal predictor and which is the moderator. In Equation 5.8 the placement of 3 designates x2 as the focal predictor and x1 as the moderator (i.e., the effect of x2 varies across levels of x1). To designate x1 the focal predictor, the [2,1] element of Ωst would contain the interaction coefficient, with element [1,2] being 0.

In summary, LISCOMP provides a flexible framework for representing moderated regression models in MLR. Like the alternative specification in which the interaction is modeled as a separate variable, LISCOMP returns the desired parameter estimates and is suitable for expression in quadratic form (Section 3.24). A clear advantage, however, is that the specification exists within a more general modeling framework. Although uncentered product terms cannot be easily centered in this specification, methodologists often recommend centering variables when conducting moderated regression analysis, as centering aids the interpretation of effects and removes nonessential multicollinearity among variables (Aiken & West, 1991).