3D Seismic Survey Design

This document describes a new 14-step methodology for optimizing 3D seismic survey design. The key steps are: 1) Determine the maximum frequency needed to resolve the thinnest target formation based on well logs and synthetics. 2) Estimate the average quality factor Q between surface and target using spectral ratios from zero-offset VSPs. 3) Construct graphs of available frequency versus time/depth based on Q to determine the maximum usable frequency at the target. The methodology aims to design a survey that can accurately record the seismic signal in the target formation while attenuating noise.


A new methodology for 3D survey design

Mike Galbraith, Seismic Image Software – A Division of GEDCO, Calgary, Alberta, Canada

Introduction
There are two main problems in designing a 3D survey. Firstly it is necessary to establish a geometry which will handle signal
correctly - in terms of resolution and amplitude fidelity. Secondly the same geometry must somehow attenuate various types of
noise which will be present. These two goals (record signal in an optimum way and attenuate noise as much as possible) can be
achieved in a design process which is described below – and which may be applied to any 3D survey no matter how complex the
sub-surface is, or where the survey is located.

There is nothing intrinsically wrong with traditional methods of 3D design (Cordsen et al., 2000), but we will see below that much more is possible, both in defining our goals more exactly and in "engineering" a 3D which is matched to each of those goals.

The method is very general and may be applied successfully to any geometry – land, marine and OBC, although some of step 9
and all of steps 11 through 14, as described, pertain more to land and OBC geometries than to marine (parallel) geometries.
Nonetheless, steps 11 and 12 still apply to marine geometries with small changes to accommodate the line spacings employed
there. The analysis of noise and migration impulse response is just as important to a marine survey as it is to land.

Note that the aerial geometries which are currently employed for some marine 4C acquisition may also be designed by this
method. In this acquisition style, receivers are spaced infrequently (for example every 400m) along two orthogonal directions
(i.e. two sets of receiver lines) while shots are taken often (e.g. every 50m) along lines which have the same spacing apart (50m).
Thus there is a very dense grid of shots and a very sparse grid of receivers. The calculations below for bin size etc. will be based
solely on the inline and crossline shot spacing (which should be equal). The “shot” and “receiver” line spacing calculations (step
9 below) are based on the inline receiver spacing and the crossline receiver spacing.

Clearly the method is intended for mature exploration areas. It requires existing well logs (sonic and other petrophysical information), VSPs, and 2D or 3D data in the same area. If these are not available, the important parameters of Fmax, bin size, fold, Xmin and Xmax should be estimated by some other means (e.g. geological and geophysical modeling). In this way, the method still provides a general framework for successful 3D design.

Method
The 14 steps to finding an optimum geometry can be stated as follows:

1. Determine the maximum frequency required to resolve the target formation thickness, from synthetics derived from well logs. This frequency is the one necessary to resolve the thinnest bed (formation interval) of interest. Many past papers have described how such a maximum frequency (or sometimes total bandwidth) may be used to calculate the "tuning thickness" of a thin bed, where this can be much less than the classical Rayleigh resolution (as shown in Figure 1, and equal to the quarter wavelength of the dominant frequency). The survey designer should choose the method most appropriate to the area in question. Whether the Rayleigh criterion or a "tuning thickness" approach is used will depend on how the designer feels the survey can meet the conditions needed for the successful application of each criterion. In most cases, this will involve the S/N ratio of the expected data and how much tolerance each approach can provide. Thus the Rayleigh criterion, while relatively unsophisticated, may be more tolerant of variations in S/N than a "tuning thickness" approach.

Figure 1 - wedge response: resolved, at Rayleigh's criterion, and unresolved

Given a maximum frequency (we call this Fmax) and assuming a symmetrical wavelet (from zero frequency to some maximum frequency) such as a Ricker, we can approximate the dominant frequency to be half the maximum frequency. Conversely, if we only know the dominant frequency (because there is no other information available), we approximate Fmax to be twice the dominant.
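This rule of thumb, and the quarter-wavelength Rayleigh resolution it implies, can be sketched in a few lines. The 3000 m/s interval velocity and 80 Hz Fmax below are illustrative values, not from the text:

```python
# Sketch of the rule of thumb above: for a roughly symmetric (Ricker-like)
# wavelet, dominant frequency ~ Fmax/2, and the Rayleigh criterion puts
# vertical resolution at a quarter wavelength of the dominant frequency.

def dominant_from_fmax(fmax_hz):
    """Approximate dominant frequency as half the maximum frequency."""
    return fmax_hz / 2.0

def fmax_from_dominant(fdom_hz):
    """Approximate maximum frequency as twice the dominant frequency."""
    return 2.0 * fdom_hz

def rayleigh_resolution(v_int_mps, fdom_hz):
    """Quarter wavelength of the dominant frequency."""
    return v_int_mps / (4.0 * fdom_hz)

# Illustrative values: interval velocity 3000 m/s, recorded Fmax of 80 Hz.
fdom = dominant_from_fmax(80.0)           # 40 Hz
print(rayleigh_resolution(3000.0, fdom))  # 18.75 m
```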
2. Estimate average inelastic attenuation Q (the quality factor) over the interval from surface to target, using the log spectral ratio of downgoing wavelets from zero-offset VSPs, or any other method which can derive a stable value for Q. The spectral ratio approach is more accurate than others in noise-free cases (Tonn, 1991). This method is explained below.

The first break wavelets on a VSP represent the downgoing wavelet. As such, each wavelet recorded on geophones at increasing depth is subject to attenuation Q (sometimes called the quality factor). To estimate the average Q, we focus on the difference between two wavelets (geophones) positioned near the surface and near the target respectively. The amplitude spectra of the two first break wavelets will be used to calculate Q between the two geophones (levels) - Figure 2.

Figure 2 - raw VSP traces; first break (downgoing) wavelets; downgoing amplitude spectra
The basic equation defining Q is: At2(f) = At1(f) x exp(-πft/Q), where
At1(f) is the spectrum of the wavelet at time t1,
At2(f) is the spectrum of the wavelet at time t2,
f is any frequency, and
t is the time difference between the two wavelets, t = t2 - t1.
From this equation, we may derive an expression for the logarithm of the ratio of the spectra at two different geophone levels in the VSP. It turns out that this log spectral ratio is linear in frequency, with a slope proportional to 1/Q. Thus the method involves fitting a regression line to the log spectral ratio between some chosen low and high frequency. The slope of the fitted regression line then gives the value for Q. Note that this Q value is essentially an average of the earth attenuation effects that occur between the time of the first wavelet and the second. It is important to use a frequency range (f1 to f2) where the behavior of the two log spectra, at the two levels, is more or less linear.
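The log-spectral-ratio fit can be sketched as follows. The synthetic spectra below stand in for the two VSP first-break wavelets; only the relation At2(f) = At1(f) x exp(-πft/Q) from the text is assumed:

```python
import numpy as np

# Minimal sketch of Q estimation by the log-spectral-ratio method.
# Since A2(f) = A1(f) * exp(-pi*f*t/Q), we have
# ln(A2(f)/A1(f)) = -(pi*t/Q) * f,  linear in f with slope -pi*t/Q.

def estimate_q(freqs, spec_shallow, spec_deep, dt):
    """Fit ln(A2/A1) vs frequency over the chosen band; Q from the slope."""
    log_ratio = np.log(spec_deep / spec_shallow)
    slope, _ = np.polyfit(freqs, log_ratio, 1)
    return -np.pi * dt / slope

# Synthetic check: true Q = 150, traveltime difference 0.8 s.
f = np.linspace(10.0, 80.0, 50)            # band where the spectra behave linearly
a1 = np.exp(-f / 60.0)                     # arbitrary shallow spectrum shape
a2 = a1 * np.exp(-np.pi * f * 0.8 / 150.0) # deep spectrum after Q attenuation
print(round(estimate_q(f, a1, a2, 0.8), 1))   # ~150.0
```

In practice the two spectra come from windowed first-break wavelets, and the band (f1, f2) must be chosen where both log spectra are well behaved, as the text notes.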

3. From the estimated Q value, graphs may be constructed (an example is shown in Figure 3) showing available frequency vs. time or depth. These graphs are determined by taking into account spreading losses, transmission and reflection losses, and the inelastic attenuation (Q). Once a high frequency signal has fallen to approximately 110 dB or more below its near surface amplitude, it can be considered lost, because it will be quantized using 5 or fewer bits. (This is because a 24 bit A/D recorder has a maximum dynamic range of 138 dB, so if the near surface amplitude is recorded at full scale, an amplitude which is 110 dB down will have a dynamic range of only 28 dB, or approximately 5 bits.) Operations such as deconvolution will severely distort such an inadequately quantized signal. Note that the application of pre-amp gain (from 0 to 60 dB) can be used to "move" the 110 dB useful recording range over the zone of interest (e.g. shallow horizon to target), thus keeping good quantization levels over all horizons (reflectors) of interest.

Figure 3 - maximum frequency vs. time, assuming Q = 200
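A rough version of such a curve can be computed from the Q term alone, by asking which frequency has just used up the available recording range at a given traveltime. The 80 dB allowance for spreading, transmission and reflection losses below is an assumed illustrative budget, not a value from the text:

```python
import math

# Rough sketch of a "usable Fmax vs time" curve in the spirit of Figure 3.
# Q attenuation alone costs 20*log10(e)*pi*f*t/Q ~= 8.686*pi*f*t/Q decibels;
# an assumed fixed budget of other losses uses up the rest of the 110 dB.

DB_PER_NEPER = 20.0 * math.log10(math.e)   # ~8.686 dB per neper

def usable_fmax(t_s, q, total_db=110.0, other_losses_db=80.0):
    """Highest frequency whose Q loss alone stays within the remaining budget."""
    q_budget_db = total_db - other_losses_db
    return q_budget_db * q / (DB_PER_NEPER * math.pi * t_s)

for t in (1.5, 2.5, 3.5, 4.5):
    print(f"t = {t} s  ->  usable Fmax ~ {usable_fmax(t, q=200.0):.0f} Hz")
```

With Q = 200 this simple budget gives a curve declining from roughly 150 Hz at 1.5 s, qualitatively similar in shape to Figure 3; the real graphs should of course carry the actual loss terms rather than a single assumed constant.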

4. Using petrophysical information (e.g. cross-plots of acoustic impedance vs. porosity), establish the criteria for detectability, which is the smallest change that we wish to see at the target level. An example of such a cross-plot is shown in Figure 4. The samples of well logs throughout a chosen interval (usually the target formation) are plotted using porosity for the X-axis and acoustic impedance for the Y-axis. A line is normally fitted through these samples by linear regression to show the average change of porosity vs. acoustic impedance in the reservoir unit (target).

For example, a 5% change in porosity may show up on a seismic trace as an 8% change in acoustic impedance. (Recall that the samples of a seismic trace have amplitudes proportional to the true values of acoustic impedance, provided that careful preservation of amplitudes is observed throughout seismic data processing!) If the seismic noise level is higher than this value, then we will not be able to detect the change. Hence we can establish the desired S/N at the target.

Figure 4 - acoustic impedance vs. porosity cross-plot
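The cross-plot regression of step 4 can be sketched as follows; the synthetic porosity-impedance samples, their scatter, and the reference porosity are all made up for illustration:

```python
import numpy as np

# Sketch of the detectability cross-plot: fit a line through (porosity,
# impedance) log samples, then translate a target porosity change into the
# fractional impedance change the survey must be able to see.

rng = np.random.default_rng(0)
porosity = rng.uniform(0.05, 0.30, 200)             # fractional porosity
impedance = 9.0e6 - 1.6e7 * porosity                # synthetic trend
impedance += rng.normal(0.0, 2.0e5, porosity.size)  # log scatter

slope, intercept = np.polyfit(porosity, impedance, 1)

dphi = 0.05                          # 5% porosity change of interest
z_ref = intercept + slope * 0.15     # impedance at a 15% reference porosity
dz_frac = abs(slope * dphi) / z_ref
print(f"5% porosity change -> {100 * dz_frac:.1f}% impedance change")
```

Dividing 100% by the resulting impedance change then gives the required S/N used later in step 7.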
5. From the study of total attenuation (point 3 above), we can evaluate the maximum frequency we will be able to see at the
target. This may be less than Fmax (point 1 above). In such cases, there is no alternative but to accept this new (lower) Fmax -
because the earth itself will prevent us from acquiring any higher frequencies at the target. The only way to increase this actual
Fmax (as shown in Figure 3) is to use a wider dynamic range for the recording instrument - or a narrower dynamic range centered
on the target using pre-amp gain or some other technique.

Now we can calculate the required source strength. In a marine environment, the calculations are straightforward. The ambient noise level is usually well known (so many microbars of noise) and the source strength is also measured in similar units, so many bar-meters (see Figure 5). Thus once the attenuation of the detectability criteria is known, this can be used to calculate the required source strength to keep the signal above the ambient noise level. On land the situation is not so simple, and field tests are usually necessary to establish the desired source strength, which is the source that gives the desired Fmax at the target.

Figure 5 - after Dragoset, LE Aug 2000

6. In this step we assume the presence of random noise in the raw data. Should there be added noise which is coherent (e.g. linear
shot noise or some other form of coherent trace to trace noise) this will complicate the design problem unless such noise can be
eliminated in the field through the use of arrays, or in processing by some multi-channel technique. Nonetheless the method of
determining S/N using auto and cross-correlations described below will include some amount of this coherent noise (particularly
when the noise is limited to a small time window and the moveout between noise and signal is very different). If moderate to
severe coherent noise is expected, the steps below to calculate fold must be strictly followed and no variation of the calculated
fold should be tolerated.

Estimate the expected S/N of raw shot data. This can be done either directly on some typical test shots, or by dividing the S/N of a stack (or migrated stack) by the square root of the fold used to make this existing stack. Since

Fold = (S/N of final migrated stack / S/N of raw data)^2, then S/N raw = S/N migrated / (Fold)^0.5.

Using an existing migrated stack has the advantage that the S/N improvement due to processing is taken into account. Thus the subsequent calculations for the desired fold to achieve the desired S/N will be more realistic.

S/N values may be calculated as follows:

The zero lag value of the auto-correlation of a trace is the sum of the zero lag value of the auto-correlation of the signal and the zero lag value of the auto-correlation of the noise, or more simply put:

Auto-correlation (AC) = Signal^2 + Noise^2

In a similar fashion (assuming that noise is not correlated from trace to trace) we can write:

Cross-correlation (XC) = Signal^2

Thus, in equation form, AC/XC = (S^2 + N^2) / S^2

Re-arranging, we get:

S/N = 1 / sqrt( (AC/XC) - 1 )

Thus we can calculate the signal to noise ratio by assuming that two neighboring traces have identical signal and different noise.
We use the ratio of the average zero lag value of the two auto-correlations of two traces and the zero lag value of the cross-
correlation of the same two traces. In practice, we sum many of the auto and cross-correlations together to obtain a good average.
Typically this calculation is repeated in a series of overlapping time and space windows to obtain a complete set of “S/N traces”
matching the input data traces.
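A minimal sketch of this estimate, checked against synthetic traces with a known S/N of 2 (the trace lengths and noise levels are illustrative):

```python
import numpy as np

# S/N from a pair of neighboring traces, as in step 6: the traces share the
# same signal but carry independent noise, so the zero-lag auto- and
# cross-correlations give S/N = 1 / sqrt(AC/XC - 1).

def snr_from_pair(trace1, trace2):
    ac = 0.5 * (np.dot(trace1, trace1) + np.dot(trace2, trace2))
    xc = np.dot(trace1, trace2)
    return 1.0 / np.sqrt(ac / xc - 1.0)

# Synthetic check: signal rms twice the noise rms, so true S/N = 2.
rng = np.random.default_rng(1)
n = 200_000
signal = rng.normal(0.0, 2.0, n)
t1 = signal + rng.normal(0.0, 1.0, n)
t2 = signal + rng.normal(0.0, 1.0, n)
print(round(snr_from_pair(t1, t2), 2))   # ~2.0
```

In a production implementation the correlations would be averaged over many trace pairs and computed in overlapping time/space windows, as the text describes.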
When nothing is known about the area, a useful rule of thumb is to require that the stack S/N = 4. Any lower level on a final migrated stack will normally mean that the interpreter will have severe difficulties in identifying potential targets. Any higher level can be noted as an added bonus.

An example of a portion of a migrated stack with three different levels of S/N is shown in Figure 6. The stack labeled as S/N = 8 is the original data. The other two were created by adding random noise until the stated S/N ratios were achieved. The difference between a S/N level of 8 and 3 is obvious.

Figure 6 - the same migrated stack at S/N = 8 (original), 5 and 3

Note that S/N changes with frequency. Thus the S/N for a low frequency range is generally higher than the S/N for high
frequencies. If high frequency is important for detectability, then fold may have to be increased to achieve the desired S/N at the
highest frequency range.

7. From the desired S/N (point 4 above) and the estimated S/N of the raw data (based on the fold of an existing data set - point 6 above), we determine the required fold of the survey under design. Thus:

Fold required = (S/N required / S/N raw)^2

An example of a fold calculation is shown in Figure 7. Note we have assumed that a S/N calculation as explained in point 6 was done and that the result showed that the S/N of the raw data would be exactly 1.0. Based on this assumed value of the raw S/N and the required final S/N of 12.5, the required fold is 156.

Figure 7 - example fold calculation: desired acoustic impedance detectability = 8%, hence required S/N = 100/8 = 12.5; S/N of raw data = 1.0 (calculated from migrated stack of existing data); hence fold required = (12.5 / 1.0)^2 = 156

Note that the calculation above derives the required fold at the target. This target fold is often less than the full fold of the survey.
This is especially true when we have a shallow target (where long offset traces will be muted and will not contribute to the target
fold). It is critical that the design should satisfy fold requirements – and therefore S/N requirements - at the target!
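The arithmetic of step 7, using Figure 7's numbers, is simply:

```python
# The fold calculation of step 7: an 8% impedance detectability criterion
# requires S/N = 100/8 = 12.5; with raw-data S/N assumed to be 1.0,
# fold = (S/N required / S/N raw)^2 ~= 156, as in Figure 7.

def required_fold(snr_required, snr_raw):
    return (snr_required / snr_raw) ** 2

snr_req = 100.0 / 8.0                       # from the detectability criterion
print(round(required_fold(snr_req, 1.0)))   # 156
```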

Some surveys are designed for pre-stack analyses such as AVO. In such cases, the calculation above (which was based on S/Nraw
as determined from a stack or a migrated volume) will not be adequate. Various authors (e.g. Cambois, LE 19, no. 11) have
pointed out the need for additional S/N requirements when determining such things as AVO parameters from cross-plots. In
particular, Cambois states that an additional 3.5dB improvement in S/N is required beyond the stack level, for accurate
calculation of the AVO intercept - and the much larger value of 15dB (at least!) for the AVO gradient. Note that a S/N
improvement of 6dB (twice the S/N) requires a four-fold increase in the stack fold. Thus, 3D design for the determination of
accurate AVO parameters can be expensive!
8. Next the required bin size is calculated. Here we assume equal inline and crossline bin size. Note that if they are not equal, the
migration impulse response will be different in the two directions (inline and crossline) and hence the resolution will also be
different. Unequal bin sizes are therefore not recommended.

Using Fmax (the required maximum frequency), we can calculate the horizontal and vertical resolution (Vermeer, 2002):

Rx = (Vrms x 0.715) / (2 x Fmax x sin(θmax) x cos(i))

θmax here is a measure of migration aperture and "i" is the half angle subtended at the target by any shot-receiver pair (see Figure 8.1). Because many shot-receiver pairs contribute to resolution (through pre-stack time or depth migration), the value of θmax is different for each pair. Thus the largest such angle (shown in Figure 8.1 as "actual line for largest θmax") should be used in the calculation for resolution. In the same way (many shot-receiver pairs), the value of "i" used is typically an average angle corresponding to an average shot-receiver offset. In the best case (maximum available migration aperture, and zero offset shot-receiver pairs), the resolution is approximately one quarter wavelength of the maximum frequency. This resolution (based on the migration aperture) is also equal to the bin size that will properly record the maximum frequency from a maximum dip angle (90 degrees).

Figure 8.1

In many 3D designs, the chosen bin size is not equal to the horizontal resolution (Rx) based on Fmax from point 1. If the bin size is chosen to be larger than the horizontal resolution, then Fmax (the highest frequency available at the target) may now depend on the target dip angle. Thus for steep dips, the choice of bin size will limit the maximum frequency which is not aliased and which, therefore, can be included in a migrated image. Then the resolution depends on this new frequency, called Fmaxunaliased in Equation 1 below, which may (or may not) be smaller than Fmax depending on the angle of dip.

It is important to understand that resolution depends on a maximum available frequency, Fmax, and that bin size may limit this frequency on targets beyond certain dip angles (maximum unaliased frequency).

The relationship between the target dip angle (θmaxdip), velocity (Vrms), maximum unaliased frequency (Fmaxunaliased) and bin size
(∆x) is given by:

∆x = Vrms / (4 x Fmaxunaliased x sin(θmaxdip))    - Equation (1)

Thus the optimum bin size to use for a dip of 90 degrees is given by Vrms / (4 x Fmaxunaliased) – or one quarter of the wavelength of
the maximum unaliased frequency. Again, note that if Fmaxunaliased = Fmax, then the bin size is equal to the resolution.

In practice, this is often relaxed (a larger bin size is used), since it is impractical (not to mention very expensive), to measure
every dip with the desired maximum frequency (Fmax). As an example, a velocity of 3000m/s and a frequency, Fmaxunaliased, of
60Hz leads to an optimum bin size of 12.5m – considerably less than is used on most land surveys today. A more typical
calculation might say that we wish to measure up to an Fmaxunaliased of 60Hz on dips of 30 degrees or less. This will relax the bin
size needed to 25m.
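Equation (1) and the text's 3000 m/s, 60 Hz example can be checked directly:

```python
import math

# Equation (1) from step 8: the bin size that keeps Fmaxunaliased unaliased
# on a given dip:  dx = Vrms / (4 * Fmaxunaliased * sin(dip)).

def bin_size(v_rms, fmax_unaliased, dip_deg):
    return v_rms / (4.0 * fmax_unaliased * math.sin(math.radians(dip_deg)))

# The text's example: Vrms = 3000 m/s, Fmaxunaliased = 60 Hz.
print(f"{bin_size(3000.0, 60.0, 90.0):.1f} m")  # 12.5 m (all dips, to vertical)
print(f"{bin_size(3000.0, 60.0, 30.0):.1f} m")  # 25.0 m (dips of 30 deg or less)
```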
In Figure 8.2, an example of a cross-plot of (bin size, Vrms) vs. frequency (Fmaxunaliased) is shown; the color is Fmaxunaliased. The dip angle (θmax) is fixed at 30 degrees. The values of Fmaxunaliased are based on Equation 1 and on Figure 3 (Fmax vs. time) above, and the plot shows how the frequency varies with velocity for a constant bin size (a horizontal line across the plot). The increase in velocity can be related to an increase in time or depth, and the figure may be interpreted as showing the available Fmaxunaliased on a dip of 30 degrees at increasing depths, for different choices of bin size. In one region of the plot the frequencies are limited by the choice of bin size; in the other, by the attenuation (Q) of Figure 3.

Figure 8.2 - Fmaxunaliased vs. (bin size, Vrms) for θmax = 30 degrees

The choice of maximum frequency (Fmax from point 1) and maximum unaliased frequency (Fmaxunaliased above) – and hence bin
size, is critical. It must be practical – in other words, the highest frequency that can be reasonably propagated from the surface
source to the dipping target – and back again to the surface receiver. And it should be close to the requirements – thus Fmaxunaliased
on the steepest expected target dip should be comparable to the choice of Fmax in point 1.

If Fmaxunaliased is too high, then the consequent choice of bin size will be too small – and money will be wasted trying to properly
record frequencies that are not available due to attenuation (Q). Conversely if Fmaxunaliased is too low, the bin size will be too large
and high frequencies coming from dipping events will be aliased and will not contribute to the final migrated image. This second
case is, in fact, standard operating procedure in many parts of the world. In other words, most surveys are under sampled!

Thus smaller bin sizes will normally improve the frequency content of dipping structures, but only up to the limit imposed by the earth, where total attenuation pushes the higher frequencies below the level at which we can properly record them (e.g. see Figure 3).

9. Determine the minimum and maximum offsets (Xmin and Xmax). These are normally calculated from muting functions used in processing, or automatic stretch mutes derived from velocities (see Figure 9).

A rule of thumb here is to use a stretch mute of the order of 20 to 25%, or even as high as 30% if long offsets are critical to success. The minimum offset corresponds to the shallowest target of interest, and the maximum offset to the deepest target of interest.

Xmin will be used to determine approximate shot and receiver line spacings (equal to Xmin divided by the square root of 2, for single fold at the shallowest target and equal shot and receiver line spacings). And Xmax will be used to determine the total dimensions of the recording patch.

Figure 9 - Xmin is set by the shallow target, Xmax by the main (deep) target
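The line-spacing arithmetic can be sketched using the relation Xmin^2 = SLI^2 + RLI^2 given in step 11; the 1000 m Xmin below is an illustrative value:

```python
import math

# Step 9 sketch: with an orthogonal layout and equal line spacings,
# Xmin^2 = SLI^2 + RLI^2 (step 11) gives SLI = RLI = Xmin / sqrt(2).

def equal_line_spacing(xmin):
    return xmin / math.sqrt(2.0)

xmin = 1000.0                         # illustrative shallow-target Xmin, metres
sli = rli = equal_line_spacing(xmin)
print(f"SLI = RLI = {sli:.0f} m")                                   # ~707 m
print(f"check: sqrt(SLI^2 + RLI^2) = {math.hypot(sli, rli):.0f} m") # 1000 m
```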
10. Migration Aperture:
Each shot creates a wavefield which travels into the sub-surface and is
reflected upwards to be recorded at the surface. Each trace must be recorded
for enough time so that reflections of interest from sub-surface points are
captured – regardless of the distance from source to sub-surface point to
receiver. And the survey itself should be spatially large enough so that all
reflections of interest are captured within the recorded area (migration
aperture). For complex areas, this step may require extensive 3D modeling.

Figure 10.1 shows an example of a model built for a complex sub-surface


area. The section shown is an “in-line” display. The cross-lines showed the
true 3-D nature of the model. Such models can be ray-traced to create
synthetic 3D data volumes. The complex data resulting from such ray-
tracing can be created with the correct times and amplitudes. This enables
an investigator to observe the effects of processing – particularly PSDM
(Pre-Stack Depth Migration) which incorporates the surface topography. By
such means, the degree of illumination on any chosen target can be
determined. In complex sub-surface areas, ray-tracing like this can establish
the “visibility” or otherwise of a target for any specified 3D acquisition
geometry.

Figure 10.1
The migration apron (the amount to add to the survey to properly record all dipping structures of interest at the edges - Cordsen et al., 2000) is normally calculated from a 3D "sheet" model of the target. Thus a colored target display as shown in Figure 10.2 (where the colors are the values of the migration apron) allows us to see how much to add on each side of the proposed survey. This gives the total area of shots and receivers. The conventional calculation for a migration aperture involves the total area of information available, which is normally symmetric. Thus the "migration apron" added to the edge of the survey is actually half the size of this conventional calculation for a "migration aperture".

Figure 10.2 - target elevations and migration apron

11. Now various candidate geometries can be developed. The critical parameters of bin size (point 8 above), fold (point 7 above) and Xmax (point 9 above) should not be changed if possible. The shot and receiver intervals (SI and RI) are, of course, simply double the required bin size. Thus the only real flexibility is to change the shot and receiver line intervals (SLI and RLI). But we must have Xmin^2 = SLI^2 + RLI^2 (assuming that the layout of shot and receiver lines is orthogonal). In most cases, it is not difficult to come up with a number of these "candidate" geometries, which all have values close to the desired fold and bin size, and meet the requirements of Xmin and Xmax. An example is shown below in Figure 11. Note that a simple cost formula applied at this stage can be effective in assisting with the choice of candidate geometries.

Fold can vary considerably if line intervals and patch sizes are changed too much. Recall that fold determines S/N and so it may
be necessary to compromise when lower fold geometry brings significant cost savings.
Figure 11
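One way to generate such candidates is a simple search over practical line spacings. The target Xmin, the spacing grid and the 15% tolerance below are all illustrative assumptions, not values from the paper:

```python
import math

# Step 11 sketch: enumerate (SLI, RLI) pairs that respect
# Xmin^2 = SLI^2 + RLI^2 within a tolerance, for a fixed bin size.

bin_size_m = 25.0                 # from step 8
si = ri = 2.0 * bin_size_m        # shot/receiver intervals follow directly: 50 m
xmin_target = 360.0               # required Xmin from step 9 (illustrative)

candidates = []
for sli in range(200, 351, 50):   # practical line-spacing grid, metres
    for rli in range(200, 351, 50):
        xmin = math.hypot(sli, rli)
        if abs(xmin - xmin_target) / xmin_target < 0.15:
            candidates.append((sli, rli, round(xmin)))

for sli, rli, xmin in candidates:
    print(f"SLI={sli}  RLI={rli}  Xmin={xmin} m")
```

Each surviving pair would then be costed and carried forward to the footprint and PSTM-response tests of step 12.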

Typical variations that can be tried are small changes of line intervals (SLI and RLI), depending on whether shots or receivers are
more expensive. Thus in heli-portable surveys over mountainous terrain, shots are usually much more expensive than receivers –
therefore we make SLI as big as possible to minimize the number of shots. In OBC surveys, receivers are much more expensive
than shots (air-guns). Therefore we make RLI as big as possible.

In all cases of orthogonal geometries, it is not wise to stray too far from shot and receiver symmetry. As the lack of symmetry
increases, the shape of the migration impulse response wavelet will change – leading to undesired differences in resolution along
two orthogonal directions.

Many surveys use different shot and receiver group intervals. Such practices will cause different resolutions in the shot line and
receiver line directions. Thus a true 3D structure will not be equally imaged in all directions. Interpolation will NOT correct these
problems. What has not been measured in the field (small spatial wavelengths in both directions) cannot be recovered by
processing techniques.

12. The candidate geometries can each be tested for their response to various types of
noise – linear shot noise, back-scattered noise, multiples and so forth. They can also be
tested for their robustness when small moves of shot lines and receiver lines are made to
get around obstacles. And finally they can also be tested for the best image (migration
response) by calculating the PSTM response wavelet at the target. The “winning”
geometry will be the one that does the best job of noise attenuation and has a reasonably
symmetrical and focused PSTM response.

“Footprints” can be checked by stacking each “offset trace” in each bin of a 3D. The
“offset traces” are chosen from a well sampled (in offset) CMP - or series of CMPs taken
from any available 2D or 3D data. An example of “offset traces” is shown in Figure 12.1.
Figure 12.1
For the 5 candidate geometries in Figure 11 above, the stacks of the traces in a single "box" of each geometry were calculated. A "box" is the area of CMPs between two adjacent shot lines and two adjacent receiver lines. The time slices of a single event at 1.508 s (see traces in Figure 12.1) are shown in Figure 12.2. The color is the stack amplitude in each CMP bin and is clearly different for the 5 geometries.

Figure 12.2 - single-event time slices for SLI/RLI = 200/200, 200/250, 200/300, 250/200 and 300/200

And in Figure 12.3 we show a zoom of the actual stack traces. The differences can be seen repeating in cycles corresponding to the width and height of the "box", as measured in number of CMPs. The differences are greater at shallow times than at deeper times, because fold is less at shallow times and consequently offset distributions differ more from one CMP to the next.

Figure 12.3 - zoomed stack traces for the same five SLI/RLI combinations

In a real 3D design analysis, the geometry with the smallest variation in amplitude is normally chosen - since this will have the
smallest footprint.

And finally, in Figure 12.4 we show a plot of the PSTM responses for candidate geometries 1, 2 and 3 in Figure 11 above. There is some asymmetry when the line spacings are not equal, but it is not large in terms of the dominant central spike (the CMP colored red). Note we only show one quarter of the response, since the other 3 quarters are mirror symmetries of this one.

Figure 12.4 - PSTM responses for SLI/RLI = 200/200, 200/250 and 200/300

13. Acquisition logistics and costs may now be estimated for the “winning” geometry. Depending on the result (e.g. over or under
budget) small changes may be made.

If large changes are needed (almost always for economic reasons), the usual first casualty is Fmax. Thus dropping our
expectations for high frequencies will lead to larger bins which will lead to a cheaper survey. Another possible casualty is the
desired S/N – or in other words – using lower fold. If such changes are made, the designer will have changed the fundamental
requirements of the survey (notably Fmax and required S/N). Thus the modified survey WILL NOT ACHIEVE THE
INTENDED RESULTS – AND MAY BE A COMPLETE FAILURE!
14. It is often the case that there will be unanswered questions after the design is finished. Field tests can resolve these final
problems. For example, dynamite shot costs depend critically on the depths of the shot holes. Only a series of field tests can
properly answer the question of the optimum shot hole depth – and charge size.

Thus field tests conducted before the main survey can be used to answer such things as:
Source choice (depth of hole/charge size, vibrator parameters – ground force, sweep frequencies, arrays etc.)
Receiver choice (buried or not – geophone type, etc.)
Arrays – both shot and receiver – to suppress shot-generated surface noise.
Recording Gain – to optimize sampling of frequencies at target levels.

Conclusions
The primary goal of any 3D is to achieve the desired frequency and S/N at the target!

We have presented a method to “calculate” such a geometry from first principles. A summary of the method is shown in Figures
C.1 and C.2.

Figure C.1 - summary of steps 1-7: (1) determine Fmax from well logs; (2) estimate Q from VSP data; (3) calculate the frequency vs. time attenuation curve (e.g. maximum frequency vs. time assuming Q = 200); (4) calculate the source strength required and Fmax at the target; (5) determine the change to be detected, e.g. from a cross-plot of porosity vs. impedance; (6) calculate the S/N of raw data from a migrated stack; (7) calculate the fold required to satisfy the S/N in (5).

Figure C.2 - summary of steps 8-14: (8) calculate the bin size to satisfy the expected Fmax, the resolution needed and the expected dip; (9) determine Xmin and Xmax; (10) calculate the migration apron; (11) formulate suitable candidate geometries (bin size, fold, Xmin, Xmax); (12) test geometries for minimal footprint and PSTM response; (13) ensure geometries meet logistical considerations; (14) perform field tests for final parameters.

Note that noise attenuation can be just as important as signal. It is worth remembering that if the CMP stack for one geometry
attenuates the noise by 6dB when compared to the CMP stack for another geometry, the fold has been effectively quadrupled.
This can have a dramatic effect on the budget!
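The 6 dB rule follows from stack S/N growing as the square root of fold; a one-line check:

```python
# A geometry that attenuates noise by an extra 6 dB behaves like one with
# roughly quadruple the fold: 6 dB is a factor of ~2 in amplitude S/N, and
# fold must scale as S/N squared.

def equivalent_fold_factor(extra_noise_attenuation_db):
    amplitude_ratio = 10.0 ** (extra_noise_attenuation_db / 20.0)
    return amplitude_ratio ** 2

print(round(equivalent_fold_factor(6.0), 2))   # 3.98, i.e. ~4x the fold
```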

To ensure the best image, the best sampling method that can be used to recreate the various spatial wavelengths in X, Y and Z
should be chosen. G. Vermeer has written the book on this approach (Vermeer, 2002) and it involves symmetric sampling –
whatever is done to shots must also be done for receivers. This approach is certainly desirable – but not always feasible for
various reasons such as logistics, budget, environment etc. Inevitably compromises are often necessary.

Some of the best geometries for noise attenuation are wide azimuth orthogonal surveys where the shot line interval is not quite equal to the receiver line interval (e.g. a ratio of 4/5 instead of 1/1). Wide azimuth slanted geometries, with angles between shot and receiver lines such as 18.435 or 26.565 degrees, can also be successful in attenuating noise. The small departure from orthogonal (18 or 26 degrees instead of zero) has only a very small effect on the imaging properties.

Budget? Be prepared to spend some money! There is nothing as expensive as a 3D that cannot be interpreted!

Suggested reading.
Cordsen, A., Galbraith, M., and Peirce, J., 2000, Planning land 3-D seismic surveys: Geophysical Developments No. 9, Soc. Expl. Geophys., 204 pp.
Tonn, R., 1991, The determination of the seismic quality factor from VSP data: A comparison of different computational methods: Geophysical Prospecting, 39, 1-27.
Vermeer, G.J.O., 2002, 3-D seismic survey design: Geophysical References Series No. 12, Soc. Expl. Geophys., 205 pp.

Corresponding author: Mike Galbraith, mgalbraith@gedco.com
