Photo by Çağlar Oskay on Unsplash
The Conversation
December 07, 2024
At around 10:44 Pacific Time on December 5, a huge earthquake struck around 60 miles off the coast of California. The magnitude 7 quake triggered a tsunami alert for some cities in northern California.
Fortunately, the potentially catastrophic wave never materialised and the warning was later rescinded. Although many people reported alarming shaking, there have so far been no reports of serious casualties from the quake, with California residents typically describing only minor damage.
A narrow escape like this is a reminder of the devastation that earthquakes in the area have the potential to cause. Residents might rightly be asking: why are we not able to better predict these quakes so that we have more advance warning? Why has there been so little progress in predicting these catastrophic natural disasters over the decades?
The truth is that earthquake prediction is extremely hard. The tectonic plates that tessellate the globe and the fault lines where they meet are extremely complex.
Trying to pick out what is a clear signal of a precursor to a potentially catastrophic shift versus the normal background noise of the Earth’s movement is difficult. Add human activities like building work, traffic or even music concerts to the mix and the task becomes near impossible.
It’s also the case that earthquakes don’t always have consistent warning signs or precursors. You can measure seismic activity as accurately as you like, but if there genuinely is no warning sign to indicate an imminent quake, then it isn’t going to help.
Despite decades of research, the scientific consensus is that individual earthquakes cannot be reliably predicted. But that doesn’t mean we can’t say anything about the likelihood of a large earthquake occurring at a particular place over a particular period of time.
I can make a prediction right now that, based on the earthquake frequencies in the two areas, my hometown of Manchester will experience fewer earthquakes of magnitude 4 or above than San Francisco over the next 12 months. It’s almost certain that I will be correct. This sort of future-facing projection is what seismologists would call a forecast, rather than a prediction.
When the amount of energy an earthquake releases is plotted against the frequency with which those earthquakes occur, a distinctive relationship emerges (see the figures below). This is the celebrated Gutenberg-Richter law. The data for earthquakes over the 50-year period from 1970 to 2020 ranges from over 40,000 magnitude 4.5 earthquakes releasing around 350,000 million joules of energy each, to just two magnitude 9.1 earthquakes releasing nearly three million million million joules of energy each.
Because the two quantities (energy and frequency) vary so widely, the relationship is easier to see when plotted using logarithmic scales (as in the right-hand panel of the figure below). When we do this, the data falls neatly onto the straight line predicted by the Gutenberg-Richter law.
Figure: The frequencies of earthquakes around the world (1970-2020) and how much energy they release.
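For readers who want to experiment with the relationship themselves, the short sketch below fits the Gutenberg-Richter relation, log10(N) = a - b*M, to some illustrative magnitude bins and counts. The bins, the counts and the standard magnitude-to-energy conversion used here are assumptions for demonstration, not the dataset behind the figure.

import numpy as np

# Illustrative sketch of the Gutenberg-Richter relation log10(N) = a - b*M.
# The magnitude bins and counts below are assumed for demonstration; they are
# not the dataset behind the figure described above.
magnitudes = np.array([4.5, 5.5, 6.5, 7.5, 8.5])
counts = np.array([40000, 4300, 470, 50, 5])   # assumed counts over a 50-year window

# Fit a straight line to log10(counts) against magnitude.
slope, intercept = np.polyfit(magnitudes, np.log10(counts), 1)
b_value = -slope
print(f"a = {intercept:.2f}, b = {b_value:.2f}")   # b typically comes out close to 1

# Standard magnitude-to-energy conversion (in joules): E = 10**(1.5*M + 4.8).
# For M = 4.5 this gives roughly 3.5e11 J, matching the figure quoted above.
energies = 10 ** (1.5 * magnitudes + 4.8)
for m, e, n in zip(magnitudes, energies, counts):
    print(f"M{m}: about {e:.2e} joules each, {n} events")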
The Gutenberg-Richter relationship seems to indicate that earthquakes follow a very predictable pattern. Knowing how often smaller earthquakes occur in a particular region can allow us, therefore, to predict how often the larger and less frequent, but more deadly, quakes will occur.
Although this doesn’t allow us to predict the time, place and size of earthquakes – what scientists would refer to as a prediction – it does provide us with vital information that tells us whether the expected frequency of earthquakes in an area makes it worthwhile expending time and money preparing for them.
San Francisco, for instance, has a 51% forecasted probability of experiencing a magnitude 7 earthquake or higher over the next 30 years. For a city like that, in a relatively wealthy country like the US, it makes sense to invest significantly in earthquake preparedness. Even if the quake could be predicted precisely and all loss of life minimised, the economic cost of rebuilding the city’s infrastructure would be catastrophic in itself.
By contrast, in a less wealthy country where similarly powerful earthquakes are forecast to happen less often, the expenditure to make the country quake-proof might not be justifiable.
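To connect a long-run rate with a probability like the 51% figure quoted above for San Francisco, one simple (and simplifying) approach is a Poisson model in which quakes occur independently at a constant average rate. The model and the back-calculation below are illustrative assumptions, not how the official forecast was produced.

import math

# Sketch: under a simple Poisson model (an assumption made for illustration),
# P(at least one quake in T years) = 1 - exp(-rate * T).
T = 30
p = 0.51   # the 30-year probability quoted for a magnitude 7+ quake near San Francisco

# Invert the formula to see what average annual rate that probability implies.
rate = -math.log(1 - p) / T
print(f"Implied rate: {rate:.4f} per year (about one such quake every {1/rate:.0f} years)")

Real forecasts are built from far more detailed fault-specific models, so this is only a rough sense check of what the headline number implies.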
The historian Edward Gibbon wrote in his memoirs that the laws of probability are “so true in general, so fallacious in particular”. Despite the fact that the Gutenberg-Richter law appears to demonstrate that seemingly unpredictable earthquake occurrences can be spectacularly well-behaved, it is a long way from being a crystal ball.
It cannot foretell the precise date and time of the next big quake. Instead, it is limited to providing only the probability that a quake above a given size will occur in a given time period.
This doesn’t mean these forecasts are useless. Far from it. They allow us to prepare for a range of scenarios, allocating resources appropriate to the risk and likelihood of each.
Exactly how we should trade off preparing for events with low probability but high potential for disaster against events with higher probabilities but lower danger is a question that those in power, who will ultimately be responsible for the consequences of these choices, will have to grapple with. They should not pretend, however, that because we cannot predict the specific timing of any particular disaster, there is nothing we can do to prepare for it.
Christian Yates, Senior Lecturer in Mathematical Biology, University of Bath
This article is republished from The Conversation under a Creative Commons license. Read the original article.