The magnitude of completeness (Mc) of an earthquake catalogue, above which all earthquakes are assumed to be detected, is of crucial importance for any statistical analysis of seismicity. Mc has been found to vary with space and time, depending in the longer term on the configuration of the seismic network, but also depending on temporarily increased seismicity rates in the form of short-term aftershock incompleteness. Most seismicity studies assume a constant Mc for the entire catalogue, enforcing a compromise between deliberately misestimating Mc and excluding large amounts of valuable data.
Epidemic-Type Aftershock Sequence (ETAS) models have been shown to be among the most successful earthquake forecasting models, both for short- and long-term hazard assessment. To leverage historical data with high Mc alongside modern data, which is complete down to low magnitudes, Leila Mizrahi, Shyam Nandan and Stefan Wiemer developed a method to calibrate the ETAS model given a time-varying completeness magnitude Mc(t). This extended calibration technique is particularly beneficial for long-term Probabilistic Seismic Hazard Assessment (PSHA), which is often based on a mixture of instrumental and historical catalogues.
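For readers unfamiliar with ETAS, the model describes seismicity as a constant background rate plus aftershocks triggered by every past event. The sketch below uses one standard temporal parameterization (the symbols K, a, c, p and Mref are generic ETAS notation, not values taken from the paper); a time-varying Mc(t) enters the calibration by restricting the triggering sum, and the likelihood, to events with m_i ≥ Mc(t_i):

```latex
% Conditional intensity of a (temporal) ETAS model with time-varying
% completeness: only detected events above Mc(t_i) contribute triggering.
\lambda(t \mid \mathcal{H}_t) \;=\; \mu
  \;+\; \sum_{i:\; t_i < t,\; m_i \ge M_c(t_i)}
    K\, e^{a\,(m_i - M_{\mathrm{ref}})}\, (t - t_i + c)^{-p}
```

Here μ is the background rate, the exponential term is the magnitude-dependent aftershock productivity, and the power law is the Omori-Utsu decay of aftershock rates with elapsed time.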
In addition, the researchers designed a self-consistent algorithm which jointly estimates ETAS parameters and high-frequency detection incompleteness, to address the potential biases in parameter calibration caused by short-term aftershock incompleteness. For this, they generalized the concept of Mc and considered a rate- and magnitude-dependent detection probability – embracing incompleteness instead of avoiding it.
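To make the idea of a magnitude-dependent detection probability concrete, here is a minimal sketch. The logistic shape and the names `detection_probability`, `beta` (sharpness) and `mu_50` (magnitude of 50% detection) are illustrative assumptions, not the parameterization used in the paper; the point is only that the observed rate is the true rate thinned by a detection curve that shifts upward during busy aftershock sequences.

```python
import numpy as np

def detection_probability(m, beta, mu_50):
    """Probability that an event of magnitude m is detected.
    Logistic curve: ~0 well below mu_50, ~1 well above it.
    (Illustrative assumption, not the paper's exact formulation.)"""
    return 1.0 / (1.0 + np.exp(-beta * (np.asarray(m, dtype=float) - mu_50)))

def expected_detected_rate(m, true_rate, beta, mu_50):
    """Thin the true occurrence rate at magnitude m by the detection
    probability. Raising mu_50 mimics short-term aftershock
    incompleteness, when small events hide in the coda of a large one."""
    return true_rate * detection_probability(m, beta, mu_50)
```

For example, with `mu_50 = 2.0` a magnitude-2 event is detected half the time, while a magnitude-5 event is essentially always detected; jointly fitting such a curve with the ETAS parameters is what lets the algorithm use incompletely recorded sequences instead of discarding them.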
To explore how the newly gained information from the second method affects earthquake predictability, Mizrahi, Nandan and Wiemer conducted pseudo-prospective forecasting experiments for California. Their model has two distinguishing features: small earthquakes are allowed, and assumed, to trigger aftershocks, and ETAS parameters are estimated with the new method. The researchers compared the forecasting performance of a model with both features, and of two additional models each having only one of the features, against the current state-of-the-art base model.
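Pseudo-prospective experiments of this kind are typically scored by comparing forecast likelihoods on data the models have not seen. The sketch below shows one common scoring scheme, a Poisson log-likelihood over space-magnitude bins and the resulting information gain per earthquake; the function names and the Poisson assumption are illustrative, not necessarily the exact scoring used in the study.

```python
import numpy as np

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Poisson log-likelihood of observed bin counts given forecast
    rates per bin. The log(k!) term is dropped: it is identical for
    every model and cancels when models are compared."""
    rates = np.asarray(forecast_rates, dtype=float)
    counts = np.asarray(observed_counts, dtype=float)
    return float(np.sum(counts * np.log(rates) - rates))

def information_gain_per_event(ll_model, ll_base, n_events):
    """Average log-likelihood advantage of a candidate model over the
    base model, per observed target earthquake. Positive values mean
    the candidate forecast is better."""
    return (ll_model - ll_base) / n_events
```

Repeating this over many testing periods, and checking whether the gain is consistently positive, is what allows a statement like "the proposed model significantly outperforms the base ETAS model".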
Preliminary results suggest that their proposed model significantly outperforms the base ETAS model. They also find that the ability to include small earthquakes in the simulation of future scenarios is the main driver of the improvement. This positive effect vanishes as the difference in magnitude between the newly included events and the forecast target events becomes large. Mizrahi, Nandan and Wiemer think that a possible explanation for this is provided by the findings of previous studies, which indicate that earthquakes tend to preferentially trigger similarly sized aftershocks. Thus, besides being able to better forecast relatively small events of magnitude 3.1 or above, the researchers gained a useful insight that can guide them in developing the next, even better, earthquake forecasting models.
The paper "The Effect of Declustering on the Size Distribution of Mainshocks" is published here: https://doi.org/10.1785/0220200231
This work has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement Number 821115, real‐time earthquake risk reduction for a resilient Europe (RISE).