
estimates for each school-by-year observation and subsequently standardize them.

Our measure of school climate comes from teachers’ responses on a yearly statewide survey in Tennessee, which was first administered in 2011–12. Part of the survey includes items that assess teachers’ perceptions of school climate. Examples of items include, “The staff at this school like being here; I would describe us as a satisfied group”, and “There is an atmosphere of trust and mutual respect within this school.” Using factor analysis, we reduce the full set of responses to a single teacher-by-year score, then average these scores within each school-by-year cell. Details on the factor analysis, including the full set of items, are available in Appendix III.9. Importantly, the survey window runs from early March to the middle of April, which is roughly one month prior to the last day of school. This timing means that many teachers likely complete the survey prior to knowing whether their principal will remain in the school for the next year.
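The scoring procedure just described can be sketched as follows. This is a minimal illustration with synthetic data; the column names, the one-factor specification, and the use of scikit-learn are our assumptions for exposition, not the authors' actual code.

```python
# Illustrative sketch: reduce survey items to one climate score per teacher,
# average within school-by-year cells, then standardize the cell means.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_teachers = 200
item_cols = [f"item_{i}" for i in range(1, 6)]

# Synthetic stand-in for five Likert-type climate items (1-5).
items = pd.DataFrame(
    rng.integers(1, 6, size=(n_teachers, 5)).astype(float), columns=item_cols
)
items["school"] = rng.integers(1, 21, size=n_teachers)
items["year"] = rng.choice([2012, 2013], size=n_teachers)

# One-factor model: a single teacher-by-year climate score.
fa = FactorAnalysis(n_components=1, random_state=0)
items["climate"] = fa.fit_transform(items[item_cols])[:, 0]

# Average teacher scores within each school-by-year cell, then standardize.
school_climate = items.groupby(["school", "year"])["climate"].mean().reset_index()
school_climate["climate_std"] = (
    school_climate["climate"] - school_climate["climate"].mean()
) / school_climate["climate"].std()
```

The standardized school-by-year score is what would enter the outcome models.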

where i, j, and t index schools, districts, and school years, respectively, and k is years relative to a principal transition. The parameters of interest are βk, which are the coefficients on the indicators for years relative to a principal transition (Dk) interacted with the treatment indicator Ti. The omitted category is k = 0, which is the final year of the departing principal. Also included in the model are time-varying school characteristics (enrollment size and the proportions of black, Hispanic, and FRPL-eligible students), school fixed effects, and district-by-year fixed effects. School fixed effects account for unobserved, time-invariant school characteristics that are related to both school performance and the likelihood that a principal leaves their position, such as the quality of school facilities or the characteristics of the neighborhood. Inclusion of district-by-year fixed effects is also important, as they account for secular trends and shocks at the district and state level, such as superintendent turnover, changes to human resources policies, or the implementation of a high-stakes evaluation system. Their inclusion also restricts the identifying variation to districts that have multiple schools, which includes 94% and 99% of school-by-year observations in Missouri and Tennessee, respectively. We cluster standard errors at the school level in all models.
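The specification can be sketched in code roughly as follows. This is a simplified illustration on synthetic data: all variable names are ours, and for brevity it uses year fixed effects where the paper uses district-by-year fixed effects. Because Ti is time-invariant within a school, its main effect is absorbed by the school fixed effects, so only the Dk × Ti interactions enter.

```python
# Illustrative event-study difference-in-differences with school and year
# fixed effects and school-clustered standard errors (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, years = 40, range(2008, 2015)
treat_school = rng.integers(0, 2, n_schools)      # T_i: treatment indicator
event_year = rng.integers(2010, 2013, n_schools)  # hypothetical transition year

rows = []
for s in range(n_schools):
    for y in years:
        k = int(np.clip(y - event_year[s], -3, 3))  # event time, binned at +/-3
        rows.append({"school": s, "year": y, "treat": treat_school[s],
                     "k": k, "enroll": rng.normal(500, 50)})
df = pd.DataFrame(rows)
df["y"] = rng.normal(size=len(df)) - 0.3 * ((df.k > 0) & (df.treat == 1))

# Event-time dummies D_k and interactions D_k * T_i; k = 0 (the departing
# principal's final year) is the omitted category.
for k in [-3, -2, -1, 1, 2, 3]:
    name = f"m{abs(k)}" if k < 0 else f"p{k}"
    df[f"D_{name}"] = (df.k == k).astype(float)
    df[f"DT_{name}"] = df[f"D_{name}"] * df.treat

terms = [c for c in df.columns if c.startswith(("D_", "DT_"))]
fit = smf.ols("y ~ " + " + ".join(terms) + " + enroll + C(school) + C(year)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["school"]})
```

The coefficients on the `DT_*` terms play the role of the βk parameters of interest.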

For βk to be an unbiased estimator of the causal effect of principal turnover, we must assume that performance in “treated” schools (i.e., schools that changed principals after year t) would have followed the same trajectory as “comparison” schools (i.e., schools that did not change principals after year t). However, we know from prior work that this assumption is unlikely to hold. Specifically, schools that change principals after year t tend to be on a downward trajectory in terms of student achievement (Miller, 2013). Thus, even conditional on school fixed effects and the other controls in the model, there likely remains substantial endogeneity bias. In more concrete terms, the decline in school performance in treatment schools means that parallel trends do not hold between treatment and comparison schools, which undermines the assumption that the comparison schools constitute a plausible counterfactual for the treatment schools. We address this challenge by constructing a matched comparison group that does not change principals after year t but experiences the same downward trend in student achievement, has similar demographic characteristics, and has a similar history of teacher and principal turnover. Prior work demonstrates that combining matching with a difference-in-differences approach can more successfully mitigate bias than either method on its own (e.g., Mueser et al., 2007).

III.3.2 Constructing a Comparison Group

For schools that change principals, we construct a comparison group by matching to schools from the same state that did not experience a principal transition in the given year. For example, schools in Tennessee that changed principals in 2012 are matched to schools in Tennessee that did not change principals in 2012. The purpose of our matching strategy is to construct a comparison group that meets the parallel trends assumption for our difference-in-differences model. The details of our matching strategy are as follows. First, we estimate a logistic regression model (separately for each state and year) that predicts the probability of a principal transition in the current year as a function of (1) current and lagged (up to five years) school achievement levels in math and reading, (2) current and lagged proportion of new-to-school teachers, (3) binary indicators for principal transitions in each of the prior five years, (4) current and lagged principal experience, and (5) current school demographics. Using the estimated propensity scores from this model, we employ a kernel matching algorithm to construct a comparison group of schools. Additionally, we restrict our matching to the area of common support, which drops a handful of treatment schools in each year (see Table III.A4). While the excluded schools are few in number, they are those with the highest estimated propensity scores (i.e., schools with the lowest achievement levels and highest teacher/principal turnover). Figure III.A11 demonstrates good overlap between treatment and comparison schools after matching. Additionally, Table III.A5 shows that the treatment and comparison groups are similar in terms of observable characteristics.
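The core mechanics of this matching step can be sketched as follows. The sketch uses synthetic data and a small set of illustrative predictors (the paper uses up to five lags of achievement and turnover); the Epanechnikov kernel and the bandwidth are our assumptions, as the text does not specify the kernel.

```python
# Illustrative propensity-score estimation, common-support restriction,
# and kernel weighting of comparison schools (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({
    "math_ach": rng.normal(size=n),          # current achievement
    "math_ach_lag1": rng.normal(size=n),     # one lag shown for brevity
    "new_teacher_share": rng.uniform(0, 0.5, n),
    "prin_experience": rng.integers(0, 15, n).astype(float),
})
treated = rng.integers(0, 2, n).astype(bool)  # principal transition this year

# Estimated propensity of a principal transition.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Common support: keep treated schools whose propensity score lies within
# the range observed among comparison schools.
lo, hi = ps[~treated].min(), ps[~treated].max()
on_support = treated & (ps >= lo) & (ps <= hi)

def kernel_weights(p_treated, p_comparison, bandwidth=0.06):
    """Epanechnikov kernel weights for one treated school's comparisons."""
    u = (p_comparison - p_treated) / bandwidth
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
    return w / w.sum() if w.sum() > 0 else w

# Weights on all comparison schools for the first on-support treated school.
w = kernel_weights(ps[on_support][0], ps[~treated])
```

Comparison schools closer in propensity score receive more weight, and schools outside the bandwidth receive zero weight.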

The key identification assumption of our difference-in-differences model is that treatment schools would have followed the trajectory of comparison schools in the absence of treatment.

Specifically, given the use of school fixed effects, district-by-year fixed effects, and our matching strategy that employs up to five lags in the outcome variable, any potential confounders would need to be (a) time-varying at the school level, (b) correlated with principal turnover in the current year, (c) not fully explained by current and lagged achievement and teacher turnover, and (d) correlated with future school performance. This strategy rules out, for example, bias from principal turnover due to falling school performance or a principal transition induced by superintendent turnover. It may not rule out bias from a change in school inputs that are coincident to the principal transition and potentially improve school performance, such as a district appropriating additional funds for after-school programs or building improvements. Nonetheless, this approach represents a substantial step toward isolating the causal effect of principal turnover.

III.3.3 Modeling Multiple Events

An additional challenge in estimating the effect of a principal transition is that schools can experience multiple principal transitions in a short span. For instance, in both states the mean number of years between principal transitions is 4.2. In Missouri, the modal school changed principals twice across the study period (2001–2015). Due to the shorter time span in Tennessee, the modal school changed principals only once (see Figure III.1). Still, many schools experienced multiple “treatments.”

The problem of multiple events has not received much attention in the literature, and there is no commonly accepted method for event studies with multiple events (Sandler and Sandler, 2014). In the study most relevant to ours, Miller (2013) notes that many of the schools in her data experienced multiple principal transitions. Her approach was effectively to treat the principal transition as the unit of observation and arrange the data to have one physical observation per event per time period (i.e., “stacking” the data). For example, if a school changed principals in 2010 and again in 2012, both “treatments” are modeled by including the school twice. In this hypothetical school, outcomes in 2013 represent both three years after a transition (in reference to 2010) and one year after a transition (in reference to 2012). Using evidence from Monte Carlo simulations, Sandler and Sandler (2014) suggest a different approach. Specifically, they propose to modify the standard event study approach, which includes a full set of mutually exclusive dummy variables for event time, to allow multiple indicators to be “turned on” at one time. For instance, in the aforementioned example of a school in 2013 with principal transitions in 2010 and 2012, each school-by-year observation is included only once in the model, but the indicators for one year since turnover and three years since turnover are both set to one.
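The Sandler and Sandler (2014) indicator construction can be illustrated with a small sketch. This is our own toy example of the running 2010/2012 school; post-transition indicators are shown out to three years for brevity.

```python
# Illustrative non-mutually-exclusive event-time indicators: a school with
# transitions in 2010 and 2012 has multiple indicators on in the same year.
import pandas as pd

transitions = [2010, 2012]          # years the school changed principals
years = range(2009, 2014)

rows = []
for year in years:
    row = {"year": year}
    for event_year in transitions:
        k = year - event_year        # years since this transition
        if 0 <= k <= 3:
            row[f"since_{k}"] = row.get(f"since_{k}", 0) + 1
    rows.append(row)

panel = pd.DataFrame(rows).fillna(0).set_index("year")
# In 2013, "one year since turnover" (the 2012 event) and "three years
# since turnover" (the 2010 event) are both switched on for this school.
```

Each school-year appears once, but the indicator columns are no longer mutually exclusive.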

We estimate models that employ both of these approaches. For our difference-in-differences model, we follow the “stacking” approach used by Miller. Here, it is necessary to treat the principal transition as the unit of observation because of our matching strategy that constructs a comparison group for each specific turnover event. Appendix III.8 demonstrates via simulation that our stacking approach yields reasonable estimates of the impact of turnover.
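The stacking of the data can be sketched as follows, again for a hypothetical school with transitions in 2010 and 2012; the data and variable names are illustrative.

```python
# Illustrative "stacking": one physical observation per transition event per
# period, so a school with two transitions appears once per event.
import pandas as pd

school = pd.DataFrame({"year": range(2009, 2014),
                       "score": [0.1, 0.0, -0.2, -0.1, 0.0]})

stacks = []
for event_id, event_year in enumerate([2010, 2012]):
    stack = school.copy()
    stack["event_id"] = event_id                    # unit of observation
    stack["rel_year"] = stack["year"] - event_year  # event time k
    stacks.append(stack)

stacked = pd.concat(stacks, ignore_index=True)
# The 2013 observation appears twice: at k = 3 (2010 event) and k = 1
# (2012 event), each with its own matched comparison group in the paper.
```

Each stack can then carry its own event-specific matched comparison schools.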

However, we also use the approach suggested in Sandler and Sandler (2014) to estimate event study models that show descriptively the dynamics of school performance before and after a principal transition.

III.3.4 Examining Different Types of Principal Turnover

In addition to the average principal transition, we also seek to estimate the effect of different types of principal turnover on school performance. Here, we again estimate a difference-in-differences model, but we also construct a new comparison group for each type of turnover.

Re-matching is important because different types of principal transitions likely reflect different circumstances in a school in the years leading up to turnover. For example, relative to the average principal transition, the pre-transition decline in performance is substantially steeper for demotions, which requires us to construct a comparison group that is more heavily weighted by lower-performing schools. We show the matching details for each type of principal turnover in Appendix III.7.