The original article was published on LinkedIn on May 17, 2017.

Paige Mamer

Senior Geophysicist - Microseismic and Induced Seismicity

During the fantastic Induced Seismicity sessions at the Calgary GeoConvention, many speakers commented on the inadequate depth accuracy of their data: “700 m uncertainty, that’s unacceptable!” I saw shocked faces, nodding heads, and heads shaking in disagreement.

Well, wait a minute! Why the varied reactions? Is 700 m good or bad?

Indeed, we do need to know where induced events are coming from. We need to know this to design better horizontal drilling and completion programs (or disposal programs), to map faults for hazard assessment and mitigation, to build better models, and to allay fears of groundwater contamination (even if faults aren’t slipping kilometers up towards the surface, we still need to be able to prove to the public, and potentially to lawmakers, that this isn’t the case).

But while criticizing the event uncertainty, nobody discussed the design of the arrays used to capture the presented data. This, I would argue, is the single most important factor in deciding whether 700 m uncertainty is really terrible, quite reasonable, or perhaps even impressive. Yes, picking error and velocity model assumptions contribute to location accuracy, but processing doesn’t mean anything if you don’t have adequate data to process.

We are quite good at figuring out the lateral position of an event, for the very reason that it is easy for us to put out a spread of sensors on the ground over the affected area. Depth is tricky because it is very difficult to fill an underground volume with sensors – the notable exception being underground mining, where sensors can be placed in a 3D volume throughout the mine. We get around this by putting more and more sensors on the ground in a targeted area – the better the accuracy we want, the more sensors we need (think of the satellites needed to get a GPS location). Surface arrays with hundreds to thousands of sensors are routinely used to measure fracturing in rock that is no louder than my fingers typing on this keyboard. Arrays built to monitor for induced seismicity typically use fewer stations because the events are bigger and easier to ‘hear’.
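To make the geometry argument concrete, below is a minimal sketch of locating a single event with a small surface array by brute-force grid search. Everything in it – the sensor layout, the homogeneous 4 km/s velocity, the 10 ms picking error, and the event itself – is an invented assumption for illustration, not anyone’s production workflow. Because the origin time is unknown and all of the sensors sit at the surface, the range of depths that fit the data tends to come out far wider than the range of lateral positions, which is exactly the depth problem described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five hypothetical surface sensors (x, y, z = 0), in metres. The geometry,
# the velocity, and the picking error below are invented for illustration.
sensors = np.array([[-3000.0, 0.0, 0.0],
                    [3000.0, 0.0, 0.0],
                    [0.0, -3000.0, 0.0],
                    [0.0, 3000.0, 0.0],
                    [1500.0, 1500.0, 0.0]])

true_event = np.array([200.0, -300.0, 2500.0])   # event 2.5 km below surface
vp = 4000.0                                      # assumed homogeneous P velocity, m/s
pick_error = 0.010                               # assumed 10 ms picking uncertainty, s

def predicted_times(source):
    """Straight-ray P travel times from a candidate source to each sensor."""
    return np.linalg.norm(sensors - source, axis=1) / vp

# Synthetic "observed" picks: true travel times plus picking noise.
observed = predicted_times(true_event) + rng.normal(0.0, pick_error, len(sensors))

def rms_misfit(source):
    """RMS residual after removing the best-fit origin-time shift."""
    resid = observed - predicted_times(source)
    resid -= resid.mean()        # origin time is unknown, so only relative moveout counts
    return np.sqrt(np.mean(resid ** 2))

# Brute-force grid search over an x-depth slice through the true event.
xs = np.arange(-1000.0, 1001.0, 50.0)
zs = np.arange(500.0, 5001.0, 50.0)
misfit = np.array([[rms_misfit(np.array([x, -300.0, z])) for z in zs] for x in xs])

# Treat every grid point within one pick error of the best fit as "acceptable".
ix, iz = np.where(misfit <= misfit.min() + pick_error)
print(f"lateral spread of acceptable locations: {xs[ix].max() - xs[ix].min():.0f} m")
print(f"depth spread of acceptable locations:   {zs[iz].max() - zs[iz].min():.0f} m")
```

In this toy setup the acceptable depth range comes out several times wider than the acceptable lateral range; adding stations, widening the array aperture, or placing sensors at depth (as in the mining case above) is what shrinks it.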

Let’s get back to the question – how good is 700 m uncertainty? The short answer? It’s a question of scale. Let’s consider an array of 5 sensors placed within a 5 km radius of a multilateral pad (i.e., many horizontal wellbores). Because the sensors are concentrated where the event is likely to be, we should be able to come up with a good location. In this case, a solution with 700 m uncertainty would leave us with more questions than answers and may not meet certain government regulations. So 700 m isn’t great.

Now take those 5 sensors and spread them all over Alberta. The uncertainty is now on the scale of many kilometers because the sensors are far from the event (just like it’s harder for our ears to pinpoint the source of a noise that is far away). So a solution with 700 m uncertainty would be astonishingly fantastic! In fact, at this scale seismologists often have to use a fixed depth to get an event epicentre (something to remember the next time you are perusing NRCAN’s Earthquakes Canada site).
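A rough back-of-the-envelope sketch of why distance matters so much: if the velocity model is off by even a small fraction, the resulting location error grows roughly in proportion to the source–receiver distance. The 1% velocity error and the distances below are assumptions chosen purely to show the scaling, not measured values.

```python
# Back-of-the-envelope scaling: how a fixed fractional velocity error maps
# into location error at different source-receiver distances.
vp = 6000.0              # assumed crustal P velocity, m/s
vel_frac_error = 0.01    # suppose the velocity model is wrong by 1%

for dist_km in (5, 50, 300):
    dist_m = dist_km * 1000.0
    travel_time = dist_m / vp
    # A 1% velocity error shifts the predicted arrival by 1% of the travel
    # time, which maps back into roughly 1% of the distance as location error.
    loc_error_m = vel_frac_error * dist_m
    print(f"{dist_km:>4} km offset: ~{travel_time:5.1f} s travel time, "
          f"~{loc_error_m / 1000:.2f} km of location error")
```

At a few kilometres of offset the same modelling error costs tens of metres; at regional distances it costs kilometres – which is why 700 m would be a remarkable result for a sparse provincial-scale network.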

What would it take to get that multilateral pad case from 700 m down to 10 m? Substantially more sensors and a lot more money, which isn’t a scalable solution for companies even in the best of times. Uncertainty at that scale is a challenge even for the denser arrays used in microseismic monitoring of any kind.

If 700 m seems so outrageous – or even 500 m, a value often used as a design target – are we under-designing our arrays, or are we simply expecting too much from the arrays we opt to put in the ground?

* Special thanks to Mrs. Mamer, who shared this article on our blog.