3.2 Interevent Time Distribution in the Full and Declustered QTM and HYS Catalogs
The interevent time (IET) distribution of earthquakes provides a powerful means to characterize temporal correlation in earthquake time series, where interevent times are calculated as the time between consecutive events ($\Delta t_i = t_i - t_{i-1}$) in the catalog. If the background seismicity in Southern California is consistent with a stationary Poisson process, the interevent times of mainshock events should follow an exponential distribution (Eq. B.1, B.3).

Figure 3.1: Seismicity map of California, showing the locations of earthquakes from the Quake Template Matching (QTM) catalog (Ross et al., 2019) and the relocated SCSN (HYS) catalog from Hauksson et al. (2012) and Lin et al. (2007). Stars mark the locations of earthquakes greater than M 6.5 from 1981 through 2018. Black lines are Quaternary faults. The black arrow is the MORVEL Pacific plate motion, relative to North America, from DeMets et al. (2010).

When using complete catalogs, including
foreshocks and aftershocks, interevent times are generally observed to follow a gamma distribution (Eq. B.5) (Corral, 2004; Davidsen and Kwiatek, 2013; Hainzl et al., 2006). The exponential distribution is in fact a special case of the gamma distribution. A gamma distribution is expected to result from the superposition of background seismicity obeying a stationary Poisson process and Omori-type aftershock sequences (Molchan, 2005), or from events triggered according to an ETAS model that accounts for both aftershocks and foreshocks (Hainzl et al., 2006); as such, it is often better suited to non-declustered catalogs. In this case, the coefficient $1/\beta$ (Eq. B.6) is argued to quantify the fraction of mainshocks (Molchan, 2005; Hainzl et al., 2006). Both the non-declustered HYS and QTM catalogs are compared with the best-fitting gamma distribution and exhibit good overall fits (Figure 3.2). The mainshock fractions estimated from the gamma distribution are about 29% and 46% for the HYS and QTM catalogs, respectively. The HYS catalog deviates from the gamma distribution at the highest interevent times, showing a "fat tail" that leads to the rejection of the gamma distribution for this catalog (Figure 3.2a). By contrast, the fit to the QTM catalog is more compelling. The IET distributions of the non-declustered catalogs deviate strongly from the exponential distribution, exhibiting a pronounced fat tail, as would be expected when aftershocks are present in the catalog (Figure 3.2c-d). Deviations from the gamma distribution suggest that the background seismicity in the HYS catalog, and possibly in the QTM catalog as well, is not consistent with a stationary Poisson process. It is therefore likely that the fit with a gamma distribution does not yield a very reliable estimate of the fraction of mainshocks forming the background seismicity. Investigating that possibility further requires extracting the background seismicity from the catalogs.
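As a rough illustration of this estimate (not the exact fitting procedure used here), the following Python sketch fits a gamma distribution to interevent times normalized by their mean and reads off the shape parameter, which under the model of Eq. B.5-B.6 approximates the mainshock fraction $1/\beta$. The function and variable names are placeholders, and the catalog is assumed to be available as an array of event times in days.

```python
import numpy as np
from scipy import stats

def mainshock_fraction_from_gamma(event_times_days):
    """Fit a gamma distribution to interevent times normalized by their mean.

    Under the gamma model for non-declustered catalogs (Eq. B.5-B.6), the
    fitted shape parameter approximates the mainshock fraction 1/beta.
    """
    times = np.sort(np.asarray(event_times_days, dtype=float))
    iet = np.diff(times)          # interevent times, dt_i = t_i - t_{i-1}
    iet = iet[iet > 0]            # drop zero gaps from identical timestamps
    x = iet / iet.mean()          # normalize by the mean interevent time t0

    # Fix the location at zero so only shape and scale are estimated.
    shape, _, scale = stats.gamma.fit(x, floc=0)

    # Goodness of fit; the p-value is only approximate because the
    # parameters were estimated from the same data.
    _, p_value = stats.kstest(x, "gamma", args=(shape, 0, scale))
    return shape, scale, p_value

# Sanity check with a synthetic Poisson process, where the shape should be ~1:
rng = np.random.default_rng(0)
poisson_times = np.cumsum(rng.exponential(scale=1.0, size=5000))
print(mainshock_fraction_from_gamma(poisson_times))
```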
We identify and remove aftershocks and foreshocks from the catalogs using the declustering algorithm of Zaliapin and Ben-Zion (2013). The approach identifies the potential parent event of each earthquake based on its distance in time and space from other earthquakes, measured with the Baiesi and Paczuski metric (Eq.
B.8) (Baiesi and Paczuski, 2004). This metric depends on the earthquake time, the distance between earthquake hypocenters or epicenters, the magnitude of the possible parent, and the fractal dimension of the earthquake hypocenter distribution, which is typically about 1.6. This distance is split into its space and time components and normalized by the parent magnitude (Eq. B.9). With this approach, the earthquakes are separated into two distinct populations (Figure 3.3). The first mode, where events are close together in space and time, consists of clustered events, and the second mode, where events are farther away in space and time, consists of background events with no parents. In each cluster, only the event with maximum magnitude is selected so that both foreshocks and aftershocks are removed. A key advantage of this approach is that it does not remove aftershocks or foreshocks on the assumption that the remaining field should be Poissonian, as was assumed by Gardner and Knopoff (1974), which makes it a better method for identifying clusters and analyzing the statistics of the remaining point field. The mainshock fraction derived from declustering the HYS catalog is 36%, a value inconsistent with the 29%
mainshock fraction obtained from taking $1/\beta$ of the best-fitting gamma distribution, a logic that assumes the background seismicity is Poissonian and stationary. The mainshock fraction derived from declustering the QTM catalog is 30%, even more inconsistent with the value of 46% obtained from the best-fitting gamma distribution. This highlights the importance of not assuming a stationary Poisson distribution in the declustering process.
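A minimal sketch of the nearest-neighbor calculation underlying this declustering is given below, assuming the general form of the Baiesi-Paczuski / Zaliapin-Ben-Zion metric (Eq. B.8-B.9) with a fractal dimension of 1.6, a b-value of 1.0, and an equal split of the magnitude weighting between the time and space components; the exact parameters and distance definition used for the QTM and HYS catalogs may differ.

```python
import numpy as np

def nearest_neighbor_distances(t, x, y, m, d_f=1.6, b=1.0, q=0.5):
    """Nearest-neighbor distances eta and their rescaled components (T, R).

    t: event times in days (assumed sorted), x, y: epicentral coordinates
    in km, m: magnitudes. For each event j, the parent is the earlier event
    i minimizing eta_ij = dt_ij * r_ij**d_f * 10**(-b * m_i), and eta is
    split into rescaled time T and rescaled distance R with T * R = eta.
    Brute-force O(N^2) search; fine for ~1e4 events, too slow for the full
    QTM catalog without spatial/temporal windowing.
    """
    n = len(t)
    eta = np.full(n, np.inf)
    T = np.full(n, np.nan)
    R = np.full(n, np.nan)
    for j in range(1, n):
        dt = t[j] - t[:j]                          # time since earlier events
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])   # epicentral distance (km)
        w = 10.0 ** (-b * m[:j])                   # parent-magnitude weight
        eta_ij = np.where(dt > 0, dt * r**d_f * w, np.inf)
        i = int(np.argmin(eta_ij))
        if not np.isfinite(eta_ij[i]):
            continue                               # no admissible parent
        eta[j] = eta_ij[i]
        T[j] = dt[i] * 10.0 ** (-q * b * m[i])
        R[j] = r[i] ** d_f * 10.0 ** (-(1.0 - q) * b * m[i])
    return eta, T, R

# Events with log10(T) + log10(R) below a threshold (-4 for the QTM catalog,
# -5 for HYS in Figure 3.3) would be flagged as clustered (fore/aftershocks);
# keeping only the largest event of each cluster yields the declustered catalog.
```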
[Figure 3.2: each panel plots the cumulative count of normalized interevent times t/t0, with the HYS catalog in the left column and the QTM catalog in the right column. Panel annotations: (a) Mmin = 2.5, p-value = 0, gamma parameters 0.29 and 3.4; (b) Mmin = 0.3, p-value = 0.34, gamma parameters 0.46 and 2.2; (c) Mmin = 2.5, p-value = 0, t0 = 0.32; (d) Mmin = 0.3, p-value = 0, t0 = 0.01; (e) Mmin = 2.5, p-value = 3e-17, t0 = 0.88; (f) Mmin = 0.3, p-value = 0.94, t0 = 0.036.]
Figure 3.2: Comparison of the earthquake IET for the HYS (a,c,e) and QTM (b,d,f) catalogs for Southern California. (a,b) show the fit of the non-declustered catalogs to a gamma distribution. (c,d) show the fit of the non-declustered catalogs to an exponential distribution (i.e., a Poisson process). (e,f) show the fit of the declustered catalogs to an exponential distribution. The IETs in each panel are normalized by the respective mean interevent time, $t_0$. Only IETs from earthquakes above the respective magnitude of completeness, $M_c$, of each catalog are plotted. The p-value is the result of testing whether the data are consistent with a gamma or exponential (Poisson) distribution, respectively.
[Figure 3.3: panels (a) and (c) show histograms of nearest-neighbor distance (log scale) for the QTM and HYS catalogs, respectively; panels (b) and (d) show the corresponding densities of rescaled distance R versus rescaled time T.]
Figure 3.3: Declustering results for the QTM catalog (a-b) and the HYS catalog (c-d). (a,c) Nearest-neighbor histogram for all events in the catalog, showing a bimodal distribution between clustered events and mainshocks. (b,d) Space-time density for all events in the catalog. The lower bright horizontal mode represents the clustered events (foreshocks and aftershocks), and the higher mode represents the background events. For the QTM catalog, the white line that separates the clustered and background modes is selected visually and given by $\log_{10} T + \log_{10} R = -4$; for the HYS catalog, the line separating the modes is given by $\log_{10} T + \log_{10} R = -5$. The proportions of mainshocks vs. clustered events are roughly 30% and 70% for the QTM catalog, and roughly 36% and 64% for the HYS catalog.
If background events do follow a Poisson process, then the interevent times follow the exponential distribution (Eq. B.3), which depends solely on the mean interevent time $t_0$. For the QTM catalog, we first select all events with magnitude larger than the reported magnitude of completeness of 0.3 (Ross et al., 2019). The declustered catalog appears fairly consistent with the exponential distribution despite the fat tail at higher interevent times (Figure 3.2f). The magnitude-frequency distribution of earthquakes in the QTM catalog actually departs from the Gutenberg-Richter distribution for magnitudes smaller than about 2 (Figure 3.4a). This is probably because the detection threshold of the template matching procedure is inhomogeneous in space: it is likely as low as 0.3 in the vicinity of the earthquakes used as templates and closer to M 2 elsewhere. Importantly for our analysis, this inhomogeneity does not affect the IET distribution, since the Poisson test does not require the magnitude of completeness to be uniform in space. A subset of independent events resulting from a Poisson process is still a Poisson process, albeit characterized by a lower rate. To verify this point, we tested the Poisson model for different cutoff magnitudes and found no sign of any systematic effect (Figure 3.5). The magnitude of completeness of the declustered QTM catalog is about 2 (Figure 3.4a), and the cumulative number of earthquakes with time in the declustered QTM catalog mostly follows a straight line, indicative of a stationary Poisson process (Figure 3.4b). The magnitude of completeness of the HYS catalog is estimated to be about 2.5 (Zaliapin and Ben-Zion, 2015) (Figure 3.4c). The declustered HYS catalog is inconsistent with the exponential distribution due to the prominent fat tail at higher interevent times (Figure 3.2e).
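The cutoff-magnitude check can be reproduced with a short script along the following lines. It is a sketch of the test logic rather than the exact implementation behind Figure 3.5, and it assumes the declustered catalog is available as arrays of event times (in days) and magnitudes with placeholder names.

```python
import numpy as np
from scipy import stats

def poisson_test_by_cutoff(times_days, mags, cutoffs):
    """KS test of normalized IETs against an exponential distribution for
    several lower magnitude cutoffs. A subset of a stationary Poisson
    process is still Poisson, so raising the cutoff should only change the
    mean interevent time t0, not systematically degrade the p-values."""
    results = []
    for m_min in cutoffs:
        t = np.sort(times_days[mags >= m_min])
        iet = np.diff(t)
        iet = iet[iet > 0]
        t0 = iet.mean()
        # p-value is approximate because t0 is estimated from the same data.
        _, p = stats.kstest(iet / t0, "expon")
        results.append((m_min, t0, p, len(iet)))
    return results

# Hypothetical usage, with 'times' and 'mags' holding the declustered catalog:
# for m_min, t0, p, n in poisson_test_by_cutoff(times, mags, [0.3, 1.0, 1.5, 2.0, 2.5]):
#     print(f"Mmin={m_min:.1f}  n={n}  t0={t0:.3f} d  p={p:.3f}")
```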
The analyses of the HYS and QTM catalogs thus seem to yield somewhat contradictory results. From Figure 3.2c-f, it is clear that both catalogs exhibit a tail deviating from the Poisson model at higher interevent times, though the tail of the HYS catalog is significantly larger than that of the QTM catalog, especially in the declustered case (Figure 3.2e-f). The tails observed in the declustered catalogs are similar in shape to those in the non-declustered catalogs, though not as prominent (note the difference in interevent time range on the abscissa). This raises the question of whether the fat tail is simply due to the presence of remaining aftershocks in the declustered catalogs, as aftershocks are inherently a non-stationary process. A non-stationary background rate would also lead to a departure from the exponential distribution. We test the deviation from the Poisson model to determine its statistical significance, and later test the effect of including aftershocks using a synthetic catalog.
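One simple way to carry out the synthetic-catalog test mentioned above is to superpose Omori-type aftershock sequences on a stationary Poisson background and then apply the same IET analysis. The sketch below shows one possible construction; the rate, productivity, and Omori parameters are illustrative choices, not values calibrated to Southern California.

```python
import numpy as np

def synthetic_catalog(duration_days=10_000.0, bg_rate=0.5, mean_aftershocks=3,
                      c=0.01, p=1.1, t_max=100.0, seed=0):
    """Stationary Poisson background plus Omori-type aftershock sequences.

    Each background event spawns a Poisson number of aftershocks (mean
    mean_aftershocks) with delays drawn from a truncated Omori law
    proportional to (t + c)**(-p) on (0, t_max) days.
    """
    rng = np.random.default_rng(seed)
    n_bg = rng.poisson(bg_rate * duration_days)
    bg_times = np.sort(rng.uniform(0.0, duration_days, n_bg))

    all_times = [bg_times]
    for t_parent in bg_times:
        k = rng.poisson(mean_aftershocks)
        if k == 0:
            continue
        # Inverse-transform sampling of the truncated Omori delay distribution.
        u = rng.uniform(size=k)
        lo = c ** (1.0 - p)
        hi = (t_max + c) ** (1.0 - p)
        delays = (lo + u * (hi - lo)) ** (1.0 / (1.0 - p)) - c
        all_times.append(t_parent + delays)

    return np.sort(np.concatenate(all_times))

# The IET distribution of this synthetic catalog can then be compared with the
# gamma and exponential models exactly as for the real catalogs, to gauge how
# much residual clustering is needed to produce a comparable fat tail.
```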