Remembering past earthquakes improves earthquake forecasts

Scientists developed a new, more accurate model to forecast earthquakes by accounting for how much strain past earthquakes released.
 

By Jeng Hann Chong, PhD Candidate at the University of New Mexico
 

Citation: Chong, J., 2023, Remembering past earthquakes improves earthquake forecasts, Temblor, http://doi.org/10.32858/temblor.300
 

Predicting the timing and location of an earthquake before it happens would be the ultimate tool for mitigating seismic hazards. Earthquake prediction is still not feasible and may never be. But in a new study, scientists have strengthened their forecasting models by looking at how fault ruptures behaved during previous earthquakes.

Earthquake forecasting models allow us to better prepare for earthquakes by providing earthquake probabilities – much like weather forecasting models. In weather forecasting, there is plenty of data to feed into the models, so statistical approaches work well, says James Neely, a postdoctoral seismologist at the University of Chicago and lead author of the new study. Seismologists, however, have only a small amount of data from past earthquakes, which makes statistics-based earthquake forecasting challenging.

Presently, earthquake forecasting models assume that earthquakes release all of a fault’s stored energy (strain), resetting the fault every time an earthquake happens. However, this reset does not necessarily represent what actually happens on a fault.

The researchers created a new forecasting model called the Long-Term Fault Memory (LTFM) model. The LTFM accounts for partial strain release on a fault and for the specific timing of past earthquakes. “This new model is more representative of what is happening on faults,” Neely says.
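To make the contrast concrete, here is a minimal sketch, in Python, of the two assumptions. It is an illustration of the general idea only, not the authors’ code, and every rate, threshold, and release fraction below is a made-up value. Strain accumulates steadily on a hypothetical fault; in the traditional view, each earthquake zeroes it out, while in a fault-memory view, each event releases only part of it, so the fault stays partly “charged.”

```python
import random

def simulate(full_reset, years=10_000, load_rate=1.0, threshold=300.0, seed=1):
    """Toy strain-budget simulation; all parameter values are illustrative."""
    random.seed(seed)
    strain = 0.0
    quake_years = []
    for year in range(years):
        strain += load_rate  # steady tectonic loading
        # Chance of rupture grows steeply as stored strain builds up
        if random.random() < 0.05 * (strain / threshold) ** 4:
            quake_years.append(year)
            if full_reset:
                strain = 0.0                        # traditional assumption: total reset
            else:
                strain *= random.uniform(0.2, 0.8)  # partial release: the fault "remembers"
    return quake_years

print("full reset     :", simulate(True)[:8])
print("partial release:", simulate(False)[:8])
```

The partial-release version tends to produce clusters of closely spaced events separated by long quiet gaps, qualitatively like the paleoseismic record described below.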

The researchers note that because the LTFM captures more realistic fault behavior, it may produce more realistic earthquake forecasts.
 

Faults behaving like rechargeable batteries

“Faults are like batteries,” says Chris Goldfinger, a geologist at Oregon State University who was not involved in this study. A fault can discharge all the energy it has in one rupture event, or release part of its strain through a cluster of multiple, smaller earthquakes, he says.

One example of a fault with partial strain release is the San Andreas Fault system. Paleoseismic studies have found that the long-term earthquake record on the San Andreas Fault is complex: long stretches with no earthquakes are punctuated by clusters of events. These sporadic occurrences indicate that earthquakes do not always happen within the expected average recurrence interval, according to the new research.
 

Pallett Creek, originally excavated in the 1970s, was one of the first sites studied to reveal the timing and magnitude of historical earthquakes along the San Andreas Fault. Credit: Michael R. Perry, via Flickr, CC BY 2.0.

 

Although current earthquake forecasting models efficiently produce simple estimates of earthquake probability, those probabilities do not always match the geologic record of past earthquakes, Neely says.

For example, the Mojave section of the San Andreas Fault had no large earthquakes for roughly 300 years before one struck in 1812, yet an earthquake of similar magnitude followed only 45 years later, in 1857. Using current (traditional) forecasting models, Neely and his colleagues calculated a very low probability for the 1857 earthquake.
 

An improved forecasting model

Using the LTFM, Neely and his colleagues tested the same section of the fault and estimated a 41% probability for the 1857 earthquake. That is much higher than the 1% to 27% likelihood predicted by other models.

Neely and his colleagues also estimated the 30-year probability of the next large earthquake on the southern section of the San Andreas Fault. The current models and the LTFM gave very similar probabilities, around 35%.

One significant difference emerges further into the future: if a large earthquake still has not happened on the San Andreas Fault, the models diverge over the coming centuries. In other words, in the LTFM the probability goes up the more time passes. Current models hold the earthquake probability constant, or let it decrease slightly, over the next 200 years, whereas the LTFM estimates a probability that keeps rising the longer the fault goes without rupturing.
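The divergence can be illustrated with a short, hedged sketch: in a memoryless (constant-hazard) model, the conditional probability of an earthquake in the next 30 years is the same no matter how long the fault has been quiet, whereas an increasing-hazard model, used here only as a generic stand-in for the LTFM’s qualitative behavior rather than its actual mathematics, yields a probability that climbs with elapsed time. The 150-year timescale and Weibull form are illustrative assumptions.

```python
import math

def cond_prob(survival, elapsed, window=30.0):
    """P(earthquake within `window` years | quiet through year `elapsed`)."""
    return 1.0 - survival(elapsed + window) / survival(elapsed)

# Memoryless model: constant hazard, ~150-year mean recurrence (illustrative)
memoryless = lambda t: math.exp(-t / 150.0)

# Increasing hazard (Weibull, shape 2): risk grows as quiet time accumulates
increasing = lambda t: math.exp(-((t / 150.0) ** 2))

for elapsed in (0, 100, 200, 300):
    print(f"{elapsed:3d} yr quiet: "
          f"memoryless {cond_prob(memoryless, elapsed):4.0%}, "
          f"increasing {cond_prob(increasing, elapsed):4.0%}")
```

The memoryless model prints the same roughly 18% for every elapsed time, while the increasing-hazard model climbs from a few percent toward more than half as quiet centuries accumulate.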
 

The future of earthquake forecasting

“We don’t know why faults have earthquake clusters and we are just beginning to have a long enough record to know what the [faults] did,” says Goldfinger. He adds that as more historical and long-term earthquake data are documented, forecasting models can be made more realistic.

Although the LTFM is more complex than current forecasting models, it is more robust and provides a better foundation for earthquake forecasting. “This [LTFM] model is a neat step in the right direction for earthquake forecasting,” says Goldfinger.

The LTFM can also be applied to other settings, such as the Cascadia Subduction Zone, Neely says. He notes that the team plans to test the model with earthquakes of different magnitudes in the future.
 

References

Neely, J.S., Salditch, L., Spencer, B.D., and Stein, S., (2022). A more realistic earthquake probability model using long-term fault memory, Bulletin of the Seismological Society of America. https://doi.org/10.1785/0120220083.