Where’s my Earthquake?

The original article was published on LinkedIn on May 17, 2017.

Paige Mamer

Senior Geophysicist - Microseismic and Induced Seismicity, LinkedIn Profile

During the fantastic Induced Seismicity sessions at the Calgary GeoConvention, many speakers commented on the inadequate depth accuracy of their data: “700 m uncertainty, that’s unacceptable!” I saw shocked faces, nods of agreement, and heads shaking in disagreement.

Well, wait a minute! Why the varied reactions? Is 700 m good or bad?

Indeed, we do need to know where induced events are coming from: to design better horizontal drilling and completion programs (or disposal programs), to map faults for hazard assessment and mitigation, to build better models, and to allay fears of groundwater contamination (even if faults aren’t slipping kilometres up towards the surface, we still need to be able to prove to the public, and potentially to lawmakers, that this isn’t the case).

But while criticizing the event uncertainty, nobody discussed the design of the arrays used to capture the presented data. This, I would argue, is the single most important factor in deciding whether 700 m accuracy is really terrible, quite reasonable, or perhaps even impressive. Yes, picking error and velocity model assumptions contribute to location accuracy, but processing doesn’t mean anything if you don’t have adequate data to process.

We are quite good at figuring out the lateral positioning of an event, for the very reason that it is easy for us to put out a spread of sensors on the ground over the affected area. Depth is tricky because it is very difficult to fill an underground volume with sensors – the notable exception being underground mining, where sensors can be placed in a 3D volume throughout the mine. We get around this by putting more and more sensors on the ground in a targeted area – the better the accuracy we want, the more sensors we need (think of how many satellites it takes to get a good GPS fix). Surface arrays with hundreds to thousands of sensors are routinely used to measure fracturing in rock as loud as my fingers typing on this keyboard. Arrays built to monitor for induced seismicity typically use fewer stations because the events are bigger and easier to ‘hear’.
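To make that geometry argument concrete, here is a minimal Python sketch (all numbers assumed for illustration: a homogeneous 4 km/s P velocity and an event at 3 km depth) showing how a straight-ray travel time's sensitivity to depth fades as surface stations sit farther from the epicentre, while its sensitivity to horizontal position does not:

```python
import numpy as np

# Toy geometry: event at 3 km depth, surface stations at increasing
# epicentral offsets, homogeneous P velocity (assumed value).
v = 4.0            # km/s, assumed homogeneous P-wave velocity
event_depth = 3.0  # km
offsets = np.array([1.0, 5.0, 20.0, 100.0])  # km, station offsets

# Straight-ray travel time t = sqrt(dx^2 + z^2) / v, so the partial
# derivatives with respect to horizontal position and depth are:
dist = np.sqrt(offsets**2 + event_depth**2)
dt_dx = offsets / (v * dist)        # s/km, horizontal sensitivity
dt_dz = event_depth / (v * dist)    # s/km, depth sensitivity

for off, sx, sz in zip(offsets, dt_dx, dt_dz):
    print(f"offset {off:6.1f} km: dt/dx = {sx:.3f} s/km, dt/dz = {sz:.3f} s/km")

# As the offsets grow, dt/dz shrinks toward zero while dt/dx approaches 1/v:
# distant surface stations barely 'feel' a change in event depth, which is
# why depth is the hardest coordinate to pin down with surface-only arrays.
```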

Let’s get back to the question – how good is 700 m uncertainty? The short answer? It’s a question of scale. Let’s consider an array of 5 sensors placed within a 5 km radius of a multilateral pad (i.e., many horizontal wellbores). Because the sensors are concentrated where the event is likely to be, we should be able to come up with a good location. In this case, a solution with 700 m resolution would leave us with more questions than answers and may not meet certain government regulations. So 700 m isn’t great.

Now take those 5 sensors and spread them all over Alberta. The uncertainty is now on the scale of many kilometres because the sensors are far from the event (just as it’s harder for our ears to pinpoint the source of a noise that is far away). So a solution with 700 m uncertainty would be astonishingly fantastic! In fact, at this scale seismologists often have to use a fixed depth to get an event epicentre (something to remember the next time you are perusing NRCan’s Earthquakes Canada site).
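To put rough numbers on that scale argument, here is a small Python sketch (all values assumed for illustration: a homogeneous 4 km/s velocity, a 20 ms picking error, five stations, and the epicentre held fixed so that only depth is searched) comparing how tightly a compact pad-scale array versus a province-scale array pins down event depth:

```python
import numpy as np

rng = np.random.default_rng(0)
v = 4.0            # km/s, assumed homogeneous velocity
sigma_pick = 0.02  # s, assumed arrival-time picking uncertainty
true_depth = 3.0   # km, assumed event depth

def depth_spread(station_radius, n_stations=5):
    """Range of depths whose predicted travel times fit the 'true' times
    to within the picking uncertainty, for surface stations scattered
    out to `station_radius` (km) from the epicentre."""
    r = station_radius * np.sqrt(rng.uniform(0.2, 1.0, n_stations))
    az = rng.uniform(0, 2 * np.pi, n_stations)
    sx, sy = r * np.cos(az), r * np.sin(az)

    def times(z):
        return np.sqrt(sx**2 + sy**2 + z**2) / v

    t_obs = times(true_depth)
    depths = np.linspace(0.0, 15.0, 1501)
    rms = np.array([np.sqrt(np.mean((times(z) - t_obs) ** 2)) for z in depths])
    ok = depths[rms < sigma_pick]
    return ok.min(), ok.max()

for radius in (5.0, 300.0):  # pad-scale array vs. province-scale array
    zmin, zmax = depth_spread(radius)
    print(f"array radius {radius:6.1f} km -> depth fits between "
          f"{zmin:.1f} and {zmax:.1f} km ({zmax - zmin:.1f} km spread)")
```

The compact array narrows the acceptable depth range to a few hundred metres, while the widely spread stations accept several kilometres of depth, which is why regional catalogues often simply fix the depth.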

What would it take to get that multilateral pad case from 700 m to 10 m? Substantially more sensors and a lot more money, which isn’t a scalable solution for companies even in the best of times. That scale of uncertainty is a challenge even for denser microseismic monitoring of any kind.

If 700 m seems so outrageous, or even 500 m as is often used as a design target, are we under-designing our arrays or are we simply expecting too much from the arrays we opt to put in the ground?

* Special thanks to Mrs. Mamer who shared this article on our blog.

Why microseismic monitoring is not dead (yet): Managing Expectations

The original article was published on LinkedIn on Jan 8, 2017.

Pierre F. Roux

Research and Technology Team Lead at Baker Hughes, LinkedIn Profile

As the industry slowly recovers from an unprecedented downturn, it has become clear that the U.S. unconventional plays have become a key player in the worldwide upstream sector. And with them comes a cohort of technologies that should enable a leap in ROI and unlock these resources across the world through a better understanding of what is at stake.

As 2017 opens, it feels like a good time to reflect on the place “passive monitoring” currently has in the O&G industry, and why it seems it will (or at least should) play a growing role in the exploration and improvement of unconventional resources.

The Growing Importance Of "Passive Seismic"

First and foremost, some figures: there were seven sessions on passive seismic during this year’s SEG Annual Meeting, and that is without counting the papers that dealt with acquisition but were not scheduled in passive-seismic-dedicated sessions (such as Jia et al. [2016] on surface patch acquisition design), the special session on Induced Seismicity, or the two dedicated post-meeting workshops. This shows, if proof were needed, how passive seismic in general – and microseismic monitoring in particular – has become a central geophysics topic, at least in North America.

Yet microseismic monitoring has suffered greatly from the downturn, as operators (and particularly the stimulation and reservoir engineers in the room) have been steering away from it, arguing that it wouldn’t give them more information than they already had. The promises microseismic monitoring held haven’t been met, and the industry has failed at managing non-specialists’ expectations.

In other words: microseismic monitoring has been way oversold.

Understanding the Physics: Do Not Overlook The Hydraulic Fracture!

The past couple of years have seen a growing body of researchers trying to better understand what drives microseismicity. It is now clear that it isn’t as simple as initially thought: there is a plurality of mechanisms that are not clearly understood or even known (aseismic slip, plasticity, etc.), and many of these may carry a different weight in the overall process depending on the basin, or even on where we stand along a lateral. In fact, because the primary goal was to sell a value proposition, most companies have overlooked what should be at the heart of our activity: trying to understand the many observations we have.

The first articles published on hydraulic fracture monitoring hypothesized that microseismic events would be generated by mode I mechanisms (i.e., the opening of the fracture itself); yet the ratio of S- to P-wave amplitudes clearly indicated that those events were caused by shear failure rather than tensile failure [Pearson, 1981]. Further to this, the energy released via seismicity is infinitesimal compared to the energy input for hydraulic fracture creation and propagation [Goodfellow et al., 2015].
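As a rough numerical illustration of that S- to P-wave amplitude argument (a sketch under assumed velocities and source models, not a reproduction of Pearson’s analysis), the Python snippet below compares far-field radiation from a pure shear (double-couple) source and from an opening, mode I crack in a homogeneous isotropic medium; the shear source radiates relatively much stronger S waves:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 4.0, 2.3  # km/s, assumed P and S velocities (Vp/Vs ~ 1.74)

def median_sp_ratio(M, n_dirs=20000):
    """Median far-field |S|/|P| displacement amplitude ratio over the
    focal sphere for moment tensor M (homogeneous isotropic medium)."""
    # Uniform random take-off directions on the unit sphere.
    g = rng.normal(size=(n_dirs, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)

    Mg = g @ M                                        # M . gamma
    p = np.einsum("ij,ij->i", g, Mg)                  # gamma . M . gamma (P radiation)
    s = np.linalg.norm(Mg - p[:, None] * g, axis=1)   # transverse part (S radiation)

    amp_p = np.abs(p) / alpha**3
    amp_s = s / beta**3
    return np.median(amp_s / np.maximum(amp_p, 1e-12))

# Pure shear slip (double-couple) moment tensor.
double_couple = np.array([[0, 1, 0],
                          [1, 0, 0],
                          [0, 0, 0]], float)
# Mode I opening crack with normal along z, assuming a Poisson solid (lambda = mu).
tensile_crack = np.diag([1.0, 1.0, 3.0])

print("median S/P, shear (double-couple):", round(median_sp_ratio(double_couple), 1))
print("median S/P, tensile opening crack:", round(median_sp_ratio(tensile_crack), 1))
```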

Along the same line of thought, take the concept of SRV (Stimulated Reservoir/Rock Volume) as defined by geophysicists (not reservoir engineers). It was thought to be a measure of the contacted reservoir, thus providing information on the productivity of a given well. Yet it has proven to be particularly off in many instances – take a look at [Cipolla and Wallace, 2014] for a detailed and fascinating discussion on this topic.
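For readers less familiar with the geophysicists’ SRV, it is commonly approximated by wrapping a volume around the located event cloud; the hypothetical sketch below uses one common variant, the convex hull of event locations, and the final comment hints at why the number can mislead:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical located microseismic events around a treatment stage
# (easting, northing, depth in metres); real clouds would be filtered
# for location uncertainty and outliers before any volume is computed.
rng = np.random.default_rng(42)
events = rng.normal(loc=[0.0, 0.0, -2500.0],
                    scale=[150.0, 60.0, 40.0],
                    size=(300, 3))

hull = ConvexHull(events)
srv_m3 = hull.volume  # convex-hull "SRV" in cubic metres

print(f"convex-hull SRV: {srv_m3 / 1e6:.2f} million m^3")
# The catch discussed above: this volume counts wherever events occurred,
# not wherever conductive, propped fracture area was actually created, so
# it can be a poor proxy for contacted (let alone producible) reservoir.
```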

This shows, if proof were needed, that microseismicity is the expression of the interaction between hydraulic fracture propagation and the formation – and not a measure of the hydraulic fracturing process per se.

Admitting Our Ignorance

The O&G industry has been pushing service companies and specialists to rationalize microseismicity and to make it “an engineering tool” able to provide qualitative information on the stimulation and the resulting production, all of it as a standalone measurement.

I would claim that, given our (current) limited understanding of the mechanisms driving microseismicity, it is unreasonable to build complex, engineered interpretations solely using microseismic information. Ongoing research will however strengthen our understanding, up to a point where, maybe, we will be able to use it to its full potential.

Accepting What Microseismicity Is NOT

Accepting that microseismicity (1) isn’t self-sufficient and (2) needs to be integrated with other measurements (such as image logs to infer fracture density, near-wellbore acoustic images of fractures, seismic reservoir characterization, etc.) into a single framework (a full-blown geomechanical model) should highlight that microseismic monitoring remains the only far-field, real-time measurement of hydraulic stimulation.

As such, it should definitely be a need-to-have rather than a good-to-have.

References

Cipolla, C., and J. Wallace (2014), Stimulated Reservoir Volume: A Misapplied Concept?, in SPE Hydraulic Fracturing Technology Conference, Society of Petroleum Engineers, The Woodlands, Texas, USA.

Goodfellow, S. D., M. H. B. Nasseri, S. C. Maxwell, and R. P. Young (2015), Hydraulic fracture energy budget: Insights from the laboratory, Geophysical Research Letters, 42(9), 3179-3187, doi: 10.1002/2015gl063093.

Jia, T., C. Regone, J. Yu, A. Gangopadhyay, R. Pool, C. Melvin, and S. Michell (2016), Microseismic surface patch array: modelling and velocity estimation using ambient noise, in SEG Annual Meeting, Society of Exploration Geophysicists, Dallas, TX, USA.

Pearson, C. (1981), The relationship between microseismicity and high pore pressures during hydraulic stimulation experiments in low permeability granitic rocks, Journal of Geophysical Research: Solid Earth, 86(B9), 7855-7864, doi: 10.1029/JB086iB09p07855.

* Special thanks to Mr. Roux who shared this article on our blog.