Guide to Volume Attributes

Introduction

Volume Attributes is an option within the Geoview program from Hampson-Russell Software
which can be used to create single-trace and multi-trace spatial attributes from seismic volumes.
Volume Attributes also includes multi-trace filters, such as the edge-preserving filter, to help
precondition the seismic data prior to generating the attributes, as well as quality-control attributes
such as the Bandwidth attribute. Unlike most Hampson-Russell programs, which require both well data and
seismic data, the Volume Attributes set of programs only requires post-stack seismic volumes as
input, although well logs can certainly be used for identifying prospects.

The following menu shows the available options in Volume Attributes:

The output data from Volume Attributes are a series of volumes, containing attribute transforms
of the seismic volumes, which give rich detail about the structures found in the seismic data.

This tutorial takes you through the process of applying Volume Attributes on a data set from the
North Sea, which exhibits a lot of faulting and steeply dipping structures. The typical workflow
consists of the following steps:

1) Start Geoview
2) Load seismic data
3) Load horizons (if available) into Geoview or pick horizons.
4) Condition the input data with some type of edge preserving filter to enhance data quality.

5) Run a variety of single-trace and spatial attributes and display the results.

In this tutorial, we will go through each of these steps. We will use the new Advanced 3D
Visualization tool in Hampson-Russell to view the Volume Attributes as 3D datasets, as 2D
profiles, and as time/horizon slices. The Advanced 3D visualization is also used to perform
RGB blending of multiple spectral volume attributes.

The interpreter will notice that it is possible to generate numerous volume attributes, and the
question then arises of how to interpret these multiple volumes. For those interested, it might prove
useful to review the Emerge and ISMap Guides to learn how to analyze and interpret multiple
volume attributes quantitatively using well control.

Note that this tutorial tries to avoid discussing the theory behind these Volume Attributes to
focus more on their use and application. However, if you are interested in the theory behind the
program we have included several Appendices at the end of the guide that discuss the key
concepts.

A Note about the Seismic Data

The data used in this tutorial is a subset of a structurally complex seismic volume over the F3
Block of the Netherlands North Sea. It has been graciously provided to us by Dr. Paul de Groot
of dGB Earth Sciences. As Dr. de Groot states in an email to our company:

“The F3 data set is released under the Creative Commons License Agreement. That is a license
for sharing data and information under similar terms and conditions as open source license
agreements like GPL. The essence of the Creative Commons License agreement is that the data
can be used for anything that in general terms is considered to be a good thing. Developing new
technology and passing on the information fits that category. The data is distributed under the
Creative Commons License Agreement http://creativecommons.org/licenses/by-sa/3.0/.”

That is, feel free to use this data as part of this tutorial or in any training exercises. However, we
ask you not to use this data for monetary gain.

This is a smaller subset of the original dataset so that the examples would not take up excessive
run time. However, anyone running this tutorial is welcome to visit the dGB Earth Sciences
website to download the full dataset.

Also, this dataset was pre-processed using the Insight Earth® Footprint Removal Workflow,
which involved:

1. Structure tensor data analysis involving a volumetric computation of structural strike and
dip at each sample in the volume.
2. Iterative footprint removal of five separate wavelength and azimuth orientations.
3. Final 3 inline x 3 crossline x 1 sample median filter.

For more information on this process or for purchase of the software itself, contact our sister
CGG GeoSoftware company Insight Earth.


Starting Geoview

To start this tutorial, first start Geoview. On a Linux workstation, do this by going to a
command window and typing geoview or by clicking the Geoview icon on your desktop if one
has been set up.

When you start Geoview, the first window that you see contains a list of projects previously
opened. Your list will be blank, like the example below, if this is the first time you have ever run
Geoview.


For this tutorial, we will start a new project. First, set all the data paths to point to the location
where you have stored the tutorial data. To do that, click the Settings and Path tabs:

Now you can see the default locations for the Data Directory, Project Directory, and
Database Directory. We would like to change all of these to point to the directory with the
tutorial data.

To change all of the directories to the same location, click the option Set all default directories
and then click the button to the right:


Then, in the File Selection Dialog, select the folder which contains the tutorial data (note that
yours will probably be different from what we show below. It is important to make note of where
your data is loaded on your computer):

After setting all the paths, the Geoview window will show the selected directories. When you have
finished setting all the paths, click Apply to store these paths:


Now open the Projects tab and create a new project by clicking New Project:

A dialog opens to set the project name. Call the project Attributes_Guide, as shown below. Enter
the project name and click OK:

Next, a dialog appears asking you the name of the database to use for this project. The database
is used to store all the wells used in this project. By default, Geoview creates a new database,
with the same name as the project and located in the same directory. That would be desirable if
we were starting a new project, intending to read in well logs from external files.

For this tutorial, to save time, we have already created a database, which has the well already
loaded. To use that database, click Specify database, and in the menu that opens, click Open:

Select F3_Block.wdb from the Available list, click Select to move it to the Selected list, and
then click OK in the Open Well Log Database dialog:

The previous dialog now shows the selected database and the new project name. Click OK to
accept this.

The Geoview window shows you that there is a single well, F03-04. First, click the arrow to the
left of the well symbol before the F03-04 name, to see that there are five curves associated with
the well. Double-click the F03-04 well name to get the well curve display (you will have to
scroll up using the slider on the right to see the display shown below):

Loading the Seismic Data

We have now loaded the wells which will be used in the Volume Attributes process. The next
step is to load the seismic volumes. On the far left side of the Geoview window, click the
Seismic tab:


The window to the right of this tab shows all seismic data loaded so far. This is empty. Go to
the bottom of the window (on the left of the workspace) and click the Import Seismic button:

Select From SEG-Y File:

Note that other formats can also be accessed, but the most common data format is SEG-Y.

On the dialog that opens, select F3_Block.sgy. Highlight this file and click Select as shown
below, and click Next at the base of the dialog:

Select the Geometry Type for the survey and click Next:


Since we do have Inline and Xline numbers and X and Y coordinates in the headers, click Next.

The Trace Header Specification page opens. By default, this page assumes that the seismic
data is a SEG-Y file with all header values filled in as per the standard SEG-Y convention. In
this case, there is one slight difference and you need to change the byte location of the Inline to
5 and the Xline to 21, as shown. Then, click Next.


Now, to scan the input data of the reference volume, click Yes on the Segy File Open dialog
that opens. If the Confirmation Required dialog opens, click No so as not to use the Fast option.

Once the scan has completed, the survey geometry is displayed. Click Ok to load the file.


Now the Well Map Table opens, allowing you to map the coordinates of the well to the
coordinates of the seismic data. It can also be opened from the Project menu.

You can edit information loaded from the well database and plot the wells within the seismic
volume. After you are done checking and editing, click OK.


The seismic data appears in wiggle trace, variable area (WTVA) format in the Seismic
window. Select the well location from the well log icon menu in the upper right corner of the
screen:

This moves the display to Inline 442 and shows the inserted P-wave sonic log curve. Click on
the Zoom In button to expand the scale and scroll down to get the following display:


Since we want to compress the data display to show more information on the screen, WTVA is
not the best format, so we will next transform the display to color format.

To do this, right-click anywhere in the seismic data area to show the display properties menu,
and then select Trace Data Volume><none>, as shown below:

Right-click anywhere in the seismic data area to show the display properties menu, and then
select Color Data Volume>F3_Block, as shown:


This will display the amplitudes of the seismic data in a normalized red to blue color scale, with
the color bar shown on the right of the screen. Finally, adjust the scale buttons on the top menu
bar to fit the inline within the view window.

The magnifying glass on the left allows you to zoom in on a user-defined window. The two
magnifying glasses to the right of this one allow you to zoom the trace and time scales up or
down by a factor of two, and the two magnifying glasses further to the right allow you to zoom
the time scale up or down by a factor of two. Finally, the magnifying glass with the cross
through it on the right lets you undo a magnification.

After adjusting your scales, your final display should look something like this:


Note that there is an interesting structure (the Zechstein salt intrusion) at the base of the section,
so let us next create a time slice over this structure. To do this, click the Processes tab under
the Project Manager and select Slice Processing>Create Data Slice:

At the top of the Process Parameters dialog, select F3_Block as Input, select a constant time of
1400 ms for the time slice, and Exactly on target for Window Option:


At the base of the dialog, set the Base name as F3_Block_1400, and click OK:

When the slice operation is finished, the map will be displayed in the Map tab.

Change the scale to 150000 as shown. Note that you can fly out, or undock, this display to a
separate window using the airplane button at the bottom right of the screen, which changes to the
word Maps when the map is flown out, indicating that you can dock it again with a click:


To avoid having to set the scale every time we display a map, click Tools>Preference Manager
at the top of the map:

Then, fill in the menu by setting Specific Scale to 150000, and finally selecting Save to Global
Parameter Manager and OK:


We can also click Set This Scale to All Maps to apply this scale to every map in one step.

Let us next create a shallower slice. Click the Processes tab and select Slice
Processing>Create Data Slice again. At the top of the menu, select F3_Block as the Input,
select a constant time of 700 ms for the time slice, and Exactly on target as the Window
Option:


At the base of the menu, set the Base name as F3_Block_700, and click OK:

When the slice operation is finished, the map will again be displayed in the Maps window. As
before, undock this display using the airplane button at the bottom right of the screen:


Note that the two time slices we have created cut across structure, as can be seen by the strong
events cutting through each map. In many cases, a stratigraphic (or stratal) slice, which displays
the amplitudes over a picked horizon, or averaged between two horizons, is preferable to a time
slice as it stays on structure and gives you an image of the subsurface as it was when the
sediments were first deposited. But first we will need to load some horizons. Before we do this,
make sure the maps are re-docked in the Maps tab by clicking the Maps button.

Loading Horizons

The last data component required is a set of horizon picks. You can use Geoview to pick the
data directly. Alternatively, you can import horizons which have been previously picked, which
is what we will do here. To start the import process, go back to the Seismic window and select
Horizon>Import Horizons>From File:


In the Import Horizon From Files dialog on the Select Horizon Files to Import page, highlight
the Horizons.txt file and click Select. At the lower left corner of the dialog, note that we are
specifying this to be a Free Format file. Click Next:


The Basic Horizon Import page then opens. Confirm that F3_Block is used as the reference
geometry volume. The other items in the menu should be filled out correctly because we used
the internal Hampson-Russell format to write out these picks. However, it is always a good idea
to select View Files to make sure that the format has been identified correctly:


The ASCII file looks like this, confirming what we see in the previous menu:

Click OK on the Basic Horizon Import menu, and the imported horizons will be displayed in
the seismic window. Note that the horizons have all been picked in the top part of the section,
above 1100 ms:


Now that we have read in the seismic horizons, there are two types of map displays we can
make, a time structure map (or isochron map between two time structures), or an amplitude map
of the amplitudes over the time structure. Let us start with a time structure map. Select
Horizon>Display Horizon above the seismic display:

On the Display Horizon menu, select the Truncation horizon and click OK:


The time structure map over the Truncation horizon appears as shown below. As with our
previous maps, make sure the scale is set to 150000:

Note that the Object Explorer pane is shown to the left of the map. If not, open it by clicking the
arrow button to the right of the eye icon.


We will discuss the features of the Object Explorer later. Next, we will create an amplitude map
over the Truncation horizon. To do this, click the Processes tab and select Slice
Processing>Create Data Slice. At the top of the Process Parameters dialog, select F3_Block
as Input, select Truncation for Target Event and Exactly on target for Window Option:

At the base of the menu, set Base name as Truncation_F3_Block, and click OK:


When the slice operation is finished, the result will again be displayed as a map. The scale will
automatically be 150000 and the map appears as below:

Right-click on the color key and select Modify Range. Change Lower value to -5171 and Upper
Value to 534:


Compare this slice to the earlier time slice we made at 700 ms, which cut across the Truncation
structure. Note that the stratigraphy is much more evident in this slice, which follows the
structure, than in the previous time slice, which does not.

Another way of creating a slice is to take an average between two horizons. As an example of
this, open the Seismic window. Display Inline 100 by typing that number in the toolbar, and
double the time scale by clicking the Vertical Zoom In button shown at the top of the seismic
display:


Notice the large amplitude anomaly between the FS8 and FS7 events. To map this amplitude
over the entire volume, click the Processes tab and select Slice Processing>Create Data
Slice. At the top of the menu, select F3_Block as Input, All data between targets for Window
Option, select FS8 for the Target Event, FS7 for the Secondary Target Event:

Set the Base name to FS8_FS7_RMS_Amp, but do not select OK yet:

Click Show Advanced Options >>, use < Unselect to remove Arithmetic Mean and Select to
add RMS Average. Then click OK:


The scale is 150000. Right-click the color bar and select Modify Range. Change Sharing to
Individual, click Default Scan, and click OK:

The map will look similar to this:


Note the very strong amplitude anomaly in the southeastern part of the map, which could
indicate a shallow gas sand.
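Incidentally, the map above was built with the RMS Average rather than the Arithmetic Mean. For those curious about the difference, here is a minimal sketch (plain Python with NumPy, not part of the Hampson-Russell software; the sample values are invented) of the two averaging options applied to the samples falling between two horizons on a single trace:

    # Arithmetic-mean vs RMS averaging of hypothetical samples between FS8 and FS7
    import numpy as np

    window = np.array([-120.0, 450.0, -800.0, 600.0, -50.0])   # made-up amplitudes

    arithmetic_mean = window.mean()             # signed values tend to cancel
    rms_average = np.sqrt(np.mean(window**2))   # emphasizes overall amplitude strength

    print(arithmetic_mean, rms_average)

Because the signed samples largely cancel in the arithmetic mean, the RMS average is usually the better choice for highlighting strong amplitudes between two horizons.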

Next we will introduce the concept of seismic attributes.

Introduction to Seismic Attributes


The objective of the Volume Attributes program is to compute seismic attributes over seismic
data volumes, and to display these attributes. By using the Create Data Slice option in
Geoview, Horizon Attributes can also be created from the volume attributes, by averaging the
volume attribute over a slice created over a window around either a constant time or a picked
horizon (or between two picked horizons). Note that in the Hampson-Russell suite of software
there are other programs, such as Emerge and ISMap, which create volume or slice estimates of
some well log parameter from weighted combinations of attributes. Thus, the Volume
Attributes module is a perfect companion to Emerge and ISMap.

But what do we mean by a seismic attribute? Every geophysicist has his or her own definition
but in general the answer is “some linear or nonlinear transformation of the seismic data
volume”. Since multiplication by the number one is a linear transformation, the seismic volume
itself can be considered as a seismic attribute. Indeed, it is the most fundamental attribute. The
Emerge program discussed earlier has an extremely loose definition of attributes, and includes
AVO volumes (such as intercept, gradient, fluid factor, etc.) and inverted volumes (such as P-
impedance, S-impedance, VP/VS ratio, lambda-rho, mu-rho, etc.) as attributes. Although this is a
reasonable definition, in the Hampson-Russell suite of software we have the AVO and STRATA
programs which create these AVO/inversion attributes, so we will not duplicate this capability in
the Volume Attributes module.

Specifically, Volume Attributes contains two types of attributes that are computed from stacked
seismic data:

1. Single trace attributes.


2. Multi-trace attributes.

As the name suggests, single trace attributes are attributes computed from a single trace, such as
instantaneous frequency, phase and amplitude, first and second derivatives, trace integration, and
windowed frequency attributes.

Multi-trace attributes are attributes computed from a group of traces by processes such as cross-
correlation, Fourier transformation and Eigen-decomposition.

Let us start by applying single trace attributes.


Single trace attributes


The menu items under Volume Attributes are shown below. Choose the Single-trace Seismic
Attributes option highlighted below:

Select this option and you will see that Single-trace Seismic Attributes consists of a set of 26
seismic attributes, as shown below in the pull-down menu:


These twenty-six attributes can be grouped in the following way:

1. Instantaneous attributes:
Amplitude Envelope
Instantaneous Phase
Instantaneous Frequency
Amplitude Weighted Cosine Phase
Amplitude Weighted Frequency
Amplitude Weighted Phase
Cosine Instantaneous Phase
Apparent Polarity
Quadrature Trace

2. Windowed frequency attributes:


Average Frequency
Dominant Frequency

3. Filter slices:
Filter 5/10-15/20
Filter 15/20-25/30
Filter 25/30-35/40
Filter 35/40-45/50
Filter 45/50-55/60
Filter 65/70-75/80

4. Derivative attributes
Derivative
Derivative Instantaneous Amplitude
Second Derivative
Second Derivative Instantaneous Amplitude

5. Integrated attributes
Integrate
Integrated Absolute Amplitude

6. Time and space attributes


Time
X-coordinate
Y-coordinate

Instantaneous Attributes

Let’s start with Instantaneous Attributes, which were first described in the classic paper by Taner
et al. (Geophysics, June 1979), in which the term “attributes” was first used. Instantaneous
attributes are computed from the complex trace, C(t), which is composed of the seismic trace,
s(t) and its Hilbert transform, h(t), which is like a 90° phase shifted trace. Writing the complex
trace in polar form, as shown below, gives us the two basic attributes: the amplitude envelope,
A(t) and instantaneous phase, φ(t). (Note that the term instantaneous amplitude is used
synonymously with amplitude envelope.) The mathematics behind instantaneous attributes is
given in Appendix 1, but the following figure shows the intuitive idea behind the two basic
attributes:

Note that if we think of the seismic sample value and its Hilbert transform as a point with its
seismic amplitude as its x coordinate and its Hilbert transform as its y coordinate, the
instantaneous amplitude is the distance from the origin to that point and the instantaneous phase is the rotation
angle. That is, we have transformed from rectangular to polar coordinates. The third basic
instantaneous attribute is the instantaneous frequency, which is the time derivative of the
instantaneous phase.
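
For readers who would like to experiment with these definitions outside the program, the following is a minimal sketch (Python with NumPy/SciPy, not the Hampson-Russell implementation; the synthetic trace and the 4 ms sample rate are assumptions) showing how the three basic instantaneous attributes follow from the analytic trace:

    # Amplitude envelope, instantaneous phase and instantaneous frequency of one trace
    import numpy as np
    from scipy.signal import hilbert

    dt = 0.004                                        # assumed 4 ms sample interval
    t = np.arange(0, 1.0, dt)
    s = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # synthetic trace standing in for real data

    analytic = hilbert(s)                             # C(t) = s(t) + i*h(t)
    envelope = np.abs(analytic)                       # amplitude envelope A(t)
    inst_phase = np.angle(analytic)                   # instantaneous phase (radians)
    inst_freq = np.diff(np.unwrap(inst_phase)) / (2 * np.pi * dt)   # Hz, derivative of phase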

Let us start by displaying the amplitude envelope of the input seismic volume. There are
actually two separate ways to do this:

1. We could select Processes>Volume Attributes>Single-trace Seismic Attributes under the
Project Manager, choose Amplitude Envelope from the pull-down menu, call the output
volume Amplitude Envelope and click OK. This would produce an Amplitude Envelope volume.


2. Select Amplitude Envelope as the Colour Attribute from the Seismic View
Parameters menu and plot it in color behind the wiggle-trace plot of the seismic
volume. This will compute the attribute “on-the-fly” for the display.

The first option is useful if you wish to use the attribute volume as an input to another process or
to output it to another software program. However, the second option has the advantage that it
does not write another file to disc, thus saving disc space. We will choose the second option here.

First, go to the Seismic tab and open a blank window next to the seismic display, using the
icon at the bottom of the screen. Then, drag-and-drop the seismic display from View 1 to View
2 to get the duplicate display shown below:


Then, click the eye icon and select Modify Attributes for View 2, as shown below:

Under the General option, change the Color Attribute to Amplitude Envelope and click OK:


Also, change the color scheme to AVO Envelope by right-clicking on the color bar and choosing
Modify Color Scheme. The result should appear as below. Note the improved visualization of
the high amplitude events:

Next, let us compute a slice between events FS7 and FS8 for the envelope volume. Again, we
will use the “on-the-fly” option. First, select Processes>Slice Processing >Create Data Slice
in the Project Manager.


On the resulting menu, select F3_Block as the Input, All data between targets for the Window
Option, select FS8 for the Target Event, FS7 for the Secondary Target Event, and name the
output FS8_FS7_Amp_Env:

At the top of the Advanced tab, choose the Amplitude Envelope for the Transform:


At the bottom of the Advanced tab choose Arithmetic Mean for the Averaging Option and
click OK:

The resulting display will look like this:


Note the excellent definition of the gas sand channel.

Next, we will look at the Instantaneous Frequency and Phase. To look at the instantaneous
phase, right-click in View 2, showing the amplitude envelope in color, and select View>Seismic
View Parameters and then select Instantaneous Phase for the Color Data Volume and click
OK at the bottom of the menu:

The instantaneous phase plot should appear as below in Window 2, with a color bar that has been
optimized for instantaneous phase:


Note that the instantaneous phase plot removes the amplitude information and shows the
stratigraphic detail very clearly.

Next, let us display the instantaneous frequency. Right-click in View 2, showing the
instantaneous phase in color, and select View>Seismic View Parameters and then select
Instantaneous Frequency for the Color Data Volume and click OK:

The instantaneous frequency plot should appear as below in Window 2, with an optimized
frequency color bar:

Averaging the instantaneous phase is not as physically meaningful as averaging amplitude or
frequency, so let us move on to the instantaneous frequency attribute.

To create an instantaneous frequency slice, select Processes>Slice Processing>Create Data
Slice under the Project Manager. On the resulting menu, select F3_Block as the Input, All
data between targets for the Window Option, select FS8 for the Target Event, FS7 for the
Secondary Target Event, and name the output FS8_FS7_Ins_Frq:

Under the Advanced tab select Instantaneous Frequency for the Transform, and Arithmetic
Mean for the Averaging Option:


Click OK. The final display looks like this; we see that the gas sand is associated with a
low frequency anomaly:

Let us next compute the dominant frequency attribute of the input seismic volume.

Select Processes>Volume Attributes>Single-trace Seismic Attributes under the Project
Manager. Select F3_Block as the input, type Dominant Frequency as the Output name, and
select Dominant_Frequency as the Attribute. Also, change the Frequency Window Length to
64 ms, the Frequency Step Size to 32 ms and the Frequency Window Taper to 16 ms:


Click OK at the bottom of this completed menu and the display will be created. Change the color
scheme to Frequency by right-clicking on the color bar and choosing Modify Color Scheme
and optimize the scale by choosing Modify Range (and clicking Default Scan) to get:

Next, compute an average slice of dominant frequency between events FS7 and FS8 for the
envelope volume. Select Processes>Slice Processing>Create Data Slice under the Project
Manager and select Dominant Frequency as the Input, All data between targets for the
Window Option, select FS8 for the Target Event, FS7 for the Secondary Target Event, and
name the output FS8_FS7_Dom_Frq, as shown here:


On the Advanced tab, select Raw Amplitude so that no transform is applied. Then, click OK. The
resulting map will look like this:

If we compare the instantaneous frequency and dominant frequency slices, the results are similar
but not identical. However, note that the gas sand channel is clearly visible on both as low
frequency content.

Both the instantaneous frequency and the dominant frequency show the actual frequency at each
seismic time. An alternate approach is to filter the seismic volume to a restricted bandpass of
frequencies and look at the seismic amplitudes themselves in this narrow frequency range.
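
The filter-slice names (for example 5/10-15/20) list what appear to be trapezoidal band-pass corner frequencies. Assuming that interpretation (an assumption, not a statement about the HRS implementation), a simple zero-phase trapezoidal band-pass can be sketched as follows in Python/NumPy:

    # Zero-phase trapezoidal band-pass: ramp up f1-f2, flat f2-f3, ramp down f3-f4 (Hz)
    import numpy as np

    def trapezoid_bandpass(trace, dt, f1, f2, f3, f4):
        n = len(trace)
        freqs = np.fft.rfftfreq(n, d=dt)
        taper = np.interp(freqs, [f1, f2, f3, f4], [0.0, 1.0, 1.0, 0.0], left=0.0, right=0.0)
        return np.fft.irfft(np.fft.rfft(trace) * taper, n)

    # Example: isolate the 5/10-15/20 Hz band of a synthetic trace sampled at 4 ms
    trace = np.random.randn(500)
    low_band = trapezoid_bandpass(trace, 0.004, 5, 10, 15, 20)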

Select Processes>Volume Attributes>Single-trace Seismic Attributes under the Project
Manager and, on the resulting menu, select F3_Block as the input, type Filter_5_10_15_20 for
the Output name, default to the complete volume, and select Filter 5/10-15/20 as the Attribute.


Finally, click OK at the bottom of this completed menu and the following display will be created,
showing the input and the output filter side by side:

Note the low frequency ringy nature of this display, as expected. However, also note how the
large amplitude event at the top of the section has been amplified in the low frequency filter
slice, telling us that the large amplitudes are most prominent in the lower end of the spectrum, as
we have seen from the earlier frequency slices.

Next, let us compute the average of the amplitude envelope of this low frequency filtered volume
between events FS7 and FS8.

As before, select Processes>Slice Processing>Create Data Slice under the Project Manager
and, on the resulting menu, select Filter_5_10_15_20 as the Input, All data between targets
for the Window Option, select FS8 for the Target Event, FS7 for the Secondary Target
Event, and name the output FS8_FS7_5_10_15_20.


The completed menu should appear as shown here:

Under the Advanced tab, choose Amplitude Envelope as the transform:


Click OK and the average of the amplitude envelope of the 5/10-15/20 Hz filter slice between the
FS8 and FS7 events will be computed and displayed as shown here:

When compared to the average amplitude envelope of the unfiltered seismic trace, shown earlier,
note that more channel structure is visible on the filter slice and the small amplitude anomaly in
the upper right side of the filter slice is less dominant than on the unfiltered seismic.

Let us finish our discussion of the seismic attributes options by computing the derivative and
integrate attributes. In Appendix 1, we show that the derivative can be thought of as an attribute
which increases the frequency content of a seismic trace and also changes peaks to edges. Using a
single sinusoidal component, it can also be shown that the effect of differentiation is to increase
the high-end frequency content of the signal and rotate it by a 90 degree phase shift, whereas the
effect of integration is to increase the low end frequency content (and thus reduce the high end
frequency content) of the signal and rotate it by a -90 degree phase shift.
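
As a rough illustration of these two attributes (a Python/NumPy sketch, not the Hampson-Russell code; the 4 ms sample rate, random trace and 200 ms smoother are assumptions that echo the defaults described below), the derivative and the de-trended integral of a single trace can be written as:

    # Derivative boosts high frequencies; integration boosts lows, so a long smoother
    # is subtracted afterwards to suppress the low-frequency drift.
    import numpy as np

    dt = 0.004
    trace = np.random.randn(1000)                 # stand-in for a seismic trace

    derivative = np.gradient(trace, dt)           # d/dt, roughly a +90 degree rotation

    integrated = np.cumsum(trace) * dt            # running integral, roughly a -90 degree rotation
    smoother_len = int(0.200 / dt)                # 200 ms smoother
    trend = np.convolve(integrated, np.ones(smoother_len) / smoother_len, mode="same")
    integrate_attr = integrated - trend           # remove the low-frequency trend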


First we will apply the derivative. Select Processes>Volume Attributes>Single-trace
Seismic Attributes under the Project Manager and, on the resulting menu, select F3_Block as
the input, type Derivative for the Output name, default to the complete volume, and select
Derivative as the Attribute:

The resulting seismic data is plotted in View 2, as shown below. Note the increased frequency
content when compared to the original seismic data in View 1 and also the increased continuity
of the seismic events:

A slice through the derivative volume would look very similar to the original volume since the
frequency increase has been in the vertical time direction.


Let us now integrate the input seismic. Select Processes>Volume Attributes>Single-trace
Seismic Attributes under the Project Manager and, on the resulting menu, select F3_Block as
the input, type Integrate for the Output name and select Integrate as the Attribute. To avoid
introducing a low frequency trend into the data, the result is smoothed and the smoothed result is
subtracted from the integrated traces. Note that the integrate attribute of the seismic trace can be
regarded as a rough bandlimited impedance inversion if we ignore the wavelet effect. For more
detailed theory about bandlimited inversion, please refer to the Strata guide. We will default
the smoother length to 200 ms, as shown below:

Click OK and the result will be shown below. Note the decrease in resolution, as seen on the
large bright spots between Xlines 950 and 1120 on inline 100. To compare this result to the
derivative, first open the third window with the 3 icon at the bottom right:


On the resulting display, drag-and-drop the integrate result into window 3 and the derivative
volume from the Project Manager’s seismic list into window 2. The resulting display will look
like this:

When you compare the derivative in the middle window to the seismic data on the left and the
integration on the right, it is obvious that the derivative has “sharpened” the seismic image and
the integration has “smoothed” it. But note that integration has certainly made the large
amplitude events stand out more clearly.

You are free to keep experimenting with the various single trace attributes, especially the ones
not covered in this guide. Note that several of these single trace attributes, average and dominant
frequency, instantaneous frequency and the frequency slices, allowed us to look at the frequency
content of our data and infer properties about the data that were not obvious from the full
spectral amplitude content.

But there are other options in volume attributes that give us more sophisticated ways of
analyzing the frequency composition of our seismic data. In the Volume Attributes program, we
provide a number of these frequency analysis methods in this release and upcoming releases of
the software. In the next section, we will give a short introduction to these methods.


An Introduction to Spectral Estimation

As pointed out by Tary et al. (2014) in their review paper, there are many different approaches to
spectral estimation, and Table 1 from their paper, shown below, lists most of them:

The HRS-10.4 version of the Volume Attributes module currently includes seven methods
shown in Table 1 above. Specifically, these are:

1. The Short Time Fourier Transform (STFT), often called spectral decomposition.
2. The Continuous Wavelet Transform (CWT), which we call simply the Wavelet
transform (WT).
3. The S transform (ST), which can be considered as a hybrid of the short time Fourier transform and
continuous wavelet transform.
4. Basis Pursuit (BP).
5. Empirical Mode Decomposition, or EMD.
6. Ensemble Empirical Mode Decomposition, or EEMD.
7. Complete Ensemble Empirical Mode Decomposition, or CEEMD.

EEMD and CEEMD are based on EMD and grouped together in the Empirical Mode
Decomposition process. From the menu the interpreter can only select EEMD or CEEMD but by
careful choice of parameters it is also possible to run EMD.

Note that Table 1 classifies these methods by whether or not they are parametric, and by their
template. Parametric methods, which include the autoregressive method and the Kalman filter,
reduce the time series to a small set of parameters and are similar to the deconvolution methods
available in the seismic analysis option in Geoview. All of the seven methods described above
are non-parametric. In future releases of the Volume Attributes program, we plan to add
additional Spectral Estimation techniques.

The short time Fourier transform uses the simplest type of template, based on sines and cosines,
and calculates the spectrogram using a fast discrete Fourier transform over a sliding time
window. The wavelet transform utilizes a more advanced template, or mother wavelet; the inner
product of the mother wavelet and the input seismic trace is computed to estimate the spectrum.
The S transform is halfway between the short time Fourier transform and the wavelet transform.
Basis pursuit compares the seismic trace with a pre-defined template, or dictionary, through an
inversion scheme. The empirical mode decomposition family of methods is data-driven and uses
no template: these methods decompose the input seismic trace into mono-component, narrow-band
sub-signals and, by calculating the instantaneous frequency of each sub-signal, obtain the
time-frequency distribution of the input seismic signal. We will now perform spectral estimation
using the short time Fourier transform, wavelet transform, S transform, basis pursuit and the
EEMD/CEEMD algorithms on the F3 Block data and compare the results.

Short Time Fourier Transform


As the name implies, the Short Time Fourier Transform, or STFT, decomposes the seismic data
into a series of overlapping “short time” windows and performs the Fourier transform on each of
these windows. As shown in Appendix 2, the equation for the STFT,


STFT(\tau,\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} s(t)\,w(t-\tau)\,e^{-i\omega t}\,dt ,

is a modified version of the Fourier transform. The results can be analyzed in various ways, to
find the time-variant dominant and average frequency of the data, and to transform the data to
various frequency bands. Thus, this option is similar to the average and dominant frequency
attributes in the single trace attributes and also the filter slices, except that it has more options
and also has an option to compute the attributes over a structural window. This last option is
similar to the spectral decomposition method (Partyka et al., 1999).
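
To make the idea concrete outside the program, here is a minimal sketch of the STFT and of dominant and average frequency extracted from it (Python/SciPy; the 4 ms sample rate, 32-sample Hann window and 50% overlap are assumptions, not necessarily the program defaults):

    # Spectrogram plus per-window dominant and power-weighted average frequency
    import numpy as np
    from scipy.signal import stft

    dt = 0.004
    trace = np.random.randn(1000)                 # stand-in seismic trace

    freqs, times, spec = stft(trace, fs=1.0 / dt, window="hann", nperseg=32, noverlap=16)
    power = np.abs(spec) ** 2

    dominant_freq = freqs[np.argmax(power, axis=0)]                  # frequency of peak power
    average_freq = (freqs[:, None] * power).sum(0) / power.sum(0)    # power-weighted mean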


To initiate the STFT, select Short Time Fourier Transform from the Process>Volume
Attributes>Spectral Decomposition menu.

Select F3_Block as Input and name the output STFT. Use the default window parameters and
select the Dominant Frequency Cube and Average Frequency Cube options. Click OK to start
the process.

There are different taper options for Taper Type:


Display the average frequency and dominant frequency in View 2 and View 3, respectively. The
result should look like the display shown below.

To view the horizon slices of the attributes, select Processes>Slice Processing>Create Data
Slice under the Project Manager and, on the resulting menu, select
STFT_DominantFrequency as the Input, All data between targets for the Window Option,
select FS8 for the Target Event, FS7 for the Secondary Target Event, and name the output
STFT_Dom_Frq_FS7_FS8.


The resulting slice is shown below:

Repeat the same procedure for average frequency.


The result is similar to the one shown below.

Now let us create constant frequency attributes for the F3_Block. Reopen Processes>Volume
Attributes>Spectral Decomposition>Short Time Fourier Transform. Input
STFT_AverageFrequency and use the same values as before for the Short Time Fourier
Transform input parameters.

Select Constant Frequency Amplitude Cubes and Constant Frequency Phase Cubes. It is
possible to select the frequency range by changing the Start Frequency, End Frequency and
Increment Frequency parameters. Note that the Frequency interval value is automatically
calculated based on the input seismic sample rate and the user-input Window Length value
using the following equation:

Frequency interval = 1 second / (seismic sample rate × Window Length)


The Increment Frequency is an integral multiple of Frequency interval. Here we use the
default parameters.
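
As a worked example of the relation above (the 4 ms sample rate is consistent with the 125 Hz Nyquist frequency quoted later in this guide; the 32-sample window is an assumption chosen because it reproduces the 7.8 Hz spacing of the output cubes):

    # Frequency interval = 1 / (sample rate x window length)
    sample_rate = 0.004           # seconds (4 ms)
    window_length = 32            # samples (assumed)

    frequency_interval = 1.0 / (sample_rate * window_length)   # = 7.8125 Hz
    # The 7.8, 23.4, 39.0 and 54.6 Hz output cubes fall on multiples of this interval.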

Note that the process creates four different frequency cubes, at 7.8 Hz, 23.4 Hz, 39.0 Hz and
54.6 Hz. The specified frequency value is contained in the output name. Display the constant
frequency amplitude cubes in order of increasing frequency as shown below.


After manually setting the color ranges of the different constant frequency volumes to the same values,
we find amplitude variations around the potential gas reservoir (circled by blue ovals). The
amplitude drops dramatically in the high frequency volume, while it remains relatively constant
in the lower frequency volumes. This can be interpreted as an attenuation effect that is useful for
hydrocarbon detection.

The constant phase cubes can also be displayed in a similar fashion as shown below.


The Phase spectrum of STFT is usually difficult to interpret when viewed as a vertical profile,
but in this case the fault is more clearly delineated as shown in the 7.8 Hz phase volume. In
general, the phase information of STFT is easier to interpret as a horizon or time slice.

Next, we create a horizon slice between FS7 and FS8 from the 7.8 Hz phase volume.


More detailed discontinuity and fault information can be seen, as shown below.

Lastly, we calculate the single event attribute for the specified horizon. Select Single Event
Amplitude Cube and Single Event Phase Cube. Also select the FS8 horizon and click OK.


Move the scrollbar to display 0 ms, and select the color data for the output. The single event
attribute describes the frequency variation for the computed time/horizon slice, so the frequency
range runs from 0 Hz to the Nyquist frequency; in this case, the Nyquist frequency of the
F3_Block is 125 Hz (View 2). Note that View 2 displays in the time domain, but it reflects the
frequency variation, so there is no information beyond 125 Hz.

We now create a data slice from the single event amplitude attribute for the FS8 horizon. Select
STFT_SingleEventAmplitude as the input, and name the output STFT_SF8_62Hz_Amp. Note
that we enter 62 ms in the target event dialog; however, it reflects 62 Hz frequency information
for the FS8 horizon.


Note that although we calculated the different attributes from the Short Time Fourier Transform
in a series of steps, all the attributes can be computed simultaneously.


Wavelet Transform
The wavelet transform (WT) is an alternative technique to decompose a signal into its time-dependent
frequency distribution. Unlike the short time Fourier transform, which utilizes a fixed-length
time window, the wavelet transform uses a variable window size. The wavelet transform
decomposes a signal s(t) by the following equation:


W(a,\tau) = \frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} s(t)\,\varphi^{*}\!\left(\frac{t-\tau}{a}\right)dt ,

where ϕ * is the complex conjugate of the mother wavelet and τ is the time shift applied to the
mother wavelet, which is also scaled by a (Herrera R.H. et al, 2014). The wavelet transform
compares the input seismic signal with a mother wavelet by taking the inner product of the two.
Because of the logarithmic scaling of the mother wavelet, the wavelet transform has denser
frequency sampling at lower frequencies than at higher frequencies, and consequently higher
frequency resolution at low frequencies than at high frequencies. The tutorial of Daubechies (1992) and the book of Mallat (2008)
both describe the wavelet transform in greater detail.
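
As an illustration of the equation above (and not the Hampson-Russell implementation), the following NumPy-only sketch evaluates a continuous wavelet transform with a complex Morlet mother wavelet; the w0 = 6 parameter and the scale-to-frequency mapping are standard Morlet conventions assumed here:

    # Brute-force CWT: correlate the trace with scaled, shifted copies of a Morlet wavelet
    import numpy as np

    def morlet(t, scale, w0=6.0):
        x = t / scale
        return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

    def cwt(trace, dt, scales):
        n = len(trace)
        t = (np.arange(n) - n // 2) * dt
        out = np.empty((len(scales), n), dtype=complex)
        for i, a in enumerate(scales):
            kernel = np.conj(morlet(t, a))[::-1]
            out[i] = np.convolve(trace, kernel, mode="same") * dt   # inner product per shift
        return out

    dt = 0.004
    trace = np.random.randn(500)
    scales = 6.0 / (2 * np.pi * np.array([10.0, 20.0, 40.0]))   # scales for roughly 10, 20, 40 Hz
    coeffs = cwt(trace, dt, scales)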

To initiate the wavelet transform, select Wavelet Transform from the Process>Volume
Attributes>Spectral Decomposition menu.


Select F3_Block as Input and name the output WT. Use the default wavelet type, select all the
output attribute options, and set 10 Hz for Increment Frequency. The program automatically
adjusts this to the nearest available frequency. Click OK to start the process.

Display the dominant frequency and average frequency in View 2 and View 3. Compared with
the outputs from short time Fourier transform, the wavelet transform shows better time frequency
resolution in this case. To be fair it is possible to enhance the time resolution of the short time
Fourier transform by using a smaller time window, but at the cost of frequency resolution. The
trade-off between time and frequency resolution is discussed in greater detail in papers by Hall
(2006) and Tary et al. (2014).


Next, select Processes>Slice Processing>Create Data Slice under the Project Manager and,
on the resulting menu, select WT_DominantFrequency as the Input, All data between targets
for the Window Option, select FS8 for the Target Event, FS7 for the Secondary Target
Event, and name the output WT_Dom_Frq_FS7_FS8.

The final display should look like this:


Repeat the same procedure for the average frequency attribute from the wavelet transform, and
display the output slice.


Next, display the constant frequency amplitude cubes in order of increasing frequency. Set
the color range from 1000 to 30000 to compare these displays.

Create the slice between FS8 and FS7 from the 14.7 Hz constant frequency amplitude cube. Select
WT_ConstantFrequencyAmplitude_14_7 Hz as the Input, All data between targets for the
Window Option, select FS8 for the Target Event, FS7 for the Secondary Target Event, and
name the output WT_20Hz_Amp_FS7_FS8.

The final display should look like this:


If you were to display the constant frequency phase cubes as vertical profiles, they would look
similar to the display below. As with the short time Fourier transform, the fault information can be
more clearly visualized in the lower frequency phase profiles.


S Transform
The S transform (ST) can be considered as a hybrid of the short time Fourier transform and the
continuous wavelet transform. Its template is sines and cosines, like the short time Fourier transform,
but the analyzed signal is tapered by a Gaussian window. This Gaussian taper is frequency-dependent
and is defined by

w(t) = e^{-\frac{t^{2}f^{2}}{2k^{2}}}

The parameter k, by controlling the width of the Gaussian taper, can be tuned to obtain better
frequency resolution at the expense of reduced time resolution, and vice versa. Combined with
the Fourier template, the S transform becomes the inner product
\left\langle s(t)\,e^{-\frac{(t-\tau)^{2}f^{2}}{2k^{2}}},\; e^{i2\pi ft}\right\rangle , yielding

S_{ST}(\tau,f) = \frac{f}{k\sqrt{2\pi}}\int_{-\infty}^{\infty} s(t)\,e^{-\frac{(t-\tau)^{2}f^{2}}{2k^{2}}}\,e^{-i2\pi ft}\,dt

The constant before the integral helps normalize the Gaussian taper function.

Like the continuous wavelet transform, the time-frequency resolution of the S transform varies
with frequency. On the other hand, the S transform keeps the properties of the Fourier transform,
such as uniform frequency sampling, spectral amplitudes that are independent of frequency owing
to the normalization of the Gaussian taper, and retention of the original signal phase. In this sense
the S transform is halfway between the short time Fourier transform and the continuous wavelet transform.
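
The S transform equation above can be evaluated directly, as in the brute-force sketch below (Python/NumPy, illustrative only; practical implementations work in the frequency domain for speed):

    # Direct evaluation of S_ST(tau, f) with a frequency-dependent Gaussian taper
    import numpy as np

    def s_transform(trace, dt, freqs, k=1.0):
        n = len(trace)
        t = np.arange(n) * dt
        out = np.zeros((len(freqs), n), dtype=complex)
        for i, f in enumerate(freqs):
            for j, tau in enumerate(t):
                gauss = np.exp(-((t - tau) ** 2) * f**2 / (2 * k**2))
                kernel = trace * gauss * np.exp(-1j * 2 * np.pi * f * t)
                out[i, j] = (abs(f) / (k * np.sqrt(2 * np.pi))) * kernel.sum() * dt
        return out

    dt = 0.004
    trace = np.random.randn(200)
    spectrum = s_transform(trace, dt, freqs=[10.0, 25.0, 40.0])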

To initiate the S transform, select S Transform from the Process>Volume
Attributes>Spectral Decomposition menu.


Select F3_Block as Input and name the output ST. Use the default Gaussian window width
factor and select all the output attributes. Set 10 Hz for Increment Frequency; the program
automatically adjusts this to the nearest available frequency. Click OK to start the process.

Display the dominant frequency in View 2.


Next, select Processes>Slice Processing>Create Data Slice under the Project Manager and,
on the resulting menu, select ST_DominantFrequency as the Input, All data between targets
for the Window Option, select FS8 for the Target Event, FS7 for the Secondary Target
Event, and name the output ST_Dom_Frq_FS7_FS8.

The final display should look like this:


If you display the constant frequency amplitude cubes as vertical profiles, they will look similar to
the display below. The constant frequency phase cubes from the S transform are usually hard to
interpret.

The above three methods are classical or conventional spectral decomposition methods. These
methods are robust and computationally light, and they have been successfully used in thin bed
analysis (Partyka et al., 1999), subsurface hydrocarbon detection, geological structure detection
(Liu et al., 2011), stratigraphic delineation and attenuation estimation (Reine et al., 2012).

The following two spectral decomposition techniques, basis pursuit and empirical mode
decomposition, emerged during the last 15 years, and both are high time-frequency resolution
techniques.

Basis pursuit
The classical seismic model is the convolution of a single wavelet with a
reflectivity series. Portniaguine and Castagna (2004) pointed out that to simplify the spectral
decomposition problem, a seismic trace s(t) can be represented as the convolution of
a family of wavelets ψ(t,n) and their associated reflection coefficients a(t,n) as

s(t) = \sum_{n=1}^{N}\left[\psi(t,n) * a(t,n)\right]

where N is the number of wavelets, t is time and n is the dilation of the wavelet
determining its frequency (Tary et al., 2014). Using matrix notation, the equation can
be rewritten as

s = \begin{pmatrix}\psi_{1} & \psi_{2} & \cdots & \psi_{N}\end{pmatrix}\begin{pmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{N}\end{pmatrix} + \eta = Da + \eta

where ψ_n denotes the convolution matrix of ψ(t,n), D is the wavelet dictionary, and
η is the random noise. At this stage, a can be interpreted as the time-frequency
dependent reflectivity, which corresponds to the time-frequency distribution of the
seismic trace s.

The above equation is an under-determined linear equation, whose solution can be
calculated through a least squares technique. Since high time-frequency resolution is
preferred, the sparsity of a is imposed using the L1 norm. Therefore the cost function is

J = \frac{1}{2}\left\| s - Da \right\|_{2}^{2} + \lambda \left\| a \right\|_{1}


The first term in the cost function J is the data misfit term calculated using the L2 norm,
which is the least squares error between the observed and predicted data. The second term
in the cost function measures the number of nonzero coefficients in a. The term λ is the
trade-off parameter controlling the relative strength of the two terms.
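
The cost function J can be minimized in several ways. As a toy illustration only (this is not necessarily the solver used in the program), the sketch below minimizes J with iterative soft thresholding (ISTA), using a random dictionary to stand in for the wavelet dictionary:

    # ISTA for J = 0.5*||s - Da||_2^2 + lambda*||a||_1
    import numpy as np

    def ista(D, s, lam, n_iter=200):
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # step from the spectral norm of D
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - s)                    # gradient of the misfit term
            z = a - step * grad
            a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((100, 300))                 # under-determined: more atoms than samples
    a_true = np.zeros(300)
    a_true[[30, 150, 220]] = [1.0, -2.0, 0.5]
    s = D @ a_true + 0.01 * rng.standard_normal(100)
    a_est = ista(D, s, lam=0.1)                         # sparse coefficient estimate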

To initiate basis pursuit select Basis Pursuit from the Process>Volume Attributes>Spectral
Decomposition menu. The following menu appears.

Make sure the input is the F3_Block volume, use a time window from 400 to 800 ms, and call the
output BP. Select Complex Ricker Wavelet as the Wavelet Type, set the Regularization parameter
to 1.5, and set the Maximum number of iterations to 20. The Regularization parameter is the
trade-off parameter λ in the cost function J above; the larger the regularization parameter, the
sparser the time-frequency output. Note that the program takes about 10 minutes to run.


You may have to change the color bar colors and range. To do this, right-click the color bar to
get the following menu and select Modify Range. Then, fill in the Edit Color Key Range as
follows to turn off the Normalized color key, set Individual as the Sharing option and set
Lower value as 0 and Upper value as 70. Then click OK. The final display looks like this:

Next, select Processes>Slice Processing>Create Data Slice under the Project Manager and,
on the resulting menu, select BP_DominantFrequency as the Input, All data between targets
for the Window Option, select FS8 for the Target Event, FS7 for the Secondary Target
Event, and name the output BP_Dom_Frq_FS7_FS8.


The final display should look like this:

Display different frequency amplitude cubes side by side to compare:


As well as the different frequency phase cubes:

Since we used the complex Ricker wavelet as the wavelet type in basis pursuit, the same as in the
wavelet transform, the output from basis pursuit should not differ greatly from the wavelet
transform. So what is the benefit of basis pursuit?

Let's drag-and-drop BP_ConstantFrequencyAmplitude_9_8Hz into View 1 and
WT_ConstantFrequencyAmplitude_9_8Hz into View 2. The final display looks like this:


Basis pursuit shows less spectral smearing compared with the wavelet transform, and therefore
separates the two components in this case (highlighted by blue circles). The spectral smearing is
reduced effectively due to the sparsity constraint applied during the inversion.

Next, select Processes>Slice Processing>Create Data Slice under the Project Manager and
select BP_ConstantFrequencyAmplitude_14_7Hz as the Input, All data between targets for
the Window Option, select FS8 for the Target Event, FS7 for the Secondary Target Event,
and name the output FS8_FS7_BP_9_8Hz.


The result looks like this:

Empirical Mode Decomposition


The last spectral estimation technique we discuss in this guide is empirical mode decomposition,
or EMD. As discussed in Appendix 2, there are several more advanced versions of EMD:
ensemble empirical mode decomposition, or EEMD, and complete ensemble empirical mode
decomposition, or CEEMD. Note that the user can only select EEMD or CEEMD from the
parameter selection menu, but by careful choice of parameters the user can run EMD. This
involves setting the Noise Level to 0 and the Realization Number to 1. This is not recommended,
as EMD usually produces suboptimal results compared to EEMD or CEEMD. Appendix 2
also gives an introduction to the theory of frequency attributes and describes other types of
frequency attributes. Note that HRS 10.3 updated the user interface of empirical mode
decomposition to make it consistent with the other spectral decomposition methods.

To initiate empirical mode decomposition or its variations, select Empirical Mode
Decomposition from the Process>Volume Attributes>Spectral Decomposition menu. The
following menu appears. Fill it out as shown:


Make sure the input is the F3_Block volume, use a time window from 400 to 800 ms, call the
output EEMD and use the EEMD algorithm (the default) to compute both the Peak Frequency
Attribute and also Constant Frequency Attributes. Let’s use the default parameters.

The Peak frequency attribute is synonymous with the dominant frequency attribute and refers to the
frequency which has the maximum energy at each time sample (Han and van der Baan, 2013).
The reason we select a smaller time window instead of the whole trace is that empirical mode
decomposition takes more computational resources than the short time Fourier, wavelet and S
transforms. Select the default values for the Noise Level, Realization Number and Gaussian Filter
Length (Half) options, as these are advanced features that are rarely used and have been set to
reasonable values. More information about the input parameters can be found in Appendix 2.
You are now ready to run the process: click OK to calculate these attributes; all the attributes are
computed simultaneously. Note that the program takes about 10 minutes.
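
To make the Peak Frequency attribute concrete, the sketch below (Python/SciPy, not the HRS code) shows one way such an attribute could be read off a set of intrinsic mode functions: take the instantaneous frequency and envelope of each IMF via the Hilbert transform and keep, at each sample, the frequency of the strongest IMF. The IMFs here are placeholders assumed to come from an external EMD/EEMD routine:

    # Peak frequency per sample from a stack of IMFs (shape: n_imfs x n_samples)
    import numpy as np
    from scipy.signal import hilbert

    def peak_frequency(imfs, dt):
        analytic = hilbert(imfs, axis=-1)
        envelope = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic), axis=-1)
        inst_freq = np.gradient(phase, dt, axis=-1) / (2 * np.pi)   # Hz, per IMF and sample
        strongest = np.argmax(envelope, axis=0)                     # dominant IMF per sample
        return np.take_along_axis(inst_freq, strongest[None, :], axis=0)[0]

    dt = 0.004
    imfs = np.random.randn(4, 500)        # placeholder for 4 IMFs of a 2 s trace
    pf = peak_frequency(imfs, dt)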

You may have to change the color bar colors and range. To do this, right-click over the color bar
to get the following menu and select Modify Range… Then, fill in the Edit Color Key Range as
follows to turn off the Normalized color key, set Individual as the Sharing option and click
Default Scan to find the amplitude range. Then click OK.


The menus should look like this:

Again, open the color key menu and select Modify Color Scheme. Then, select the Frequency
color key.

The final display should look like this:

It is interesting to note that there is a low frequency anomaly below the shallow gas anomaly. This should show up on the horizon slices between the FS8 and FS7 markers.

To see this, create a slice of Peak Frequency, select Processes>Slice Processing>Create Data
Slice under the Project Manager and select EEMD_PeakFrequency as the Input, All data
between targets for the Window Option, select FS8 for the Target Event, FS7 for the
Secondary Target Event, and name the output FS8_FS7_PeakFreq.


The completed menu is as shown below:

Under the Advanced tab, select Raw Amplitude, as shown below, and then click OK:


After optimizing the color scale and setting the frequency color bar as discussed earlier, the map
should look like this:

As we saw with both the instantaneous frequency and dominant frequency slices, the channel
shows up as a low frequency event.

Next, let us look at the three single frequency volumes computed by EEMD. Go back to the
Seismic View and open up a third window using the icon at the bottom of the screen:


Then, select Project Data>Seismic and drag-and-drop the 4.9 Hz, 24.5 Hz and 63.7 Hz volumes
into the three separate windows, one at a time:

You can then modify the range of each volume independently, but make sure that the color bar is
shared by all three volumes, as shown below:


Here is the final display, with the color scheme optimized for these three volumes:

Note that each frequency volume highlights a different part of the channel sequence.

To create average slices between the FS8 and FS7 horizons for these three constant frequency volumes, you simply need to select the Processes>Slice Processing>Create Data Slice menu and change the Input and Base Name, since the menu recalls the other parameter values:

The completed menu will appear as follows:


Under the Advanced tab, select Amplitude Envelope:

When you have completed the processing, the slices from three volumes will appear:

Low frequency slice Mid-frequency slice

High frequency slice

Note that each frequency slice defines a different channel structure.


Complete Ensemble Empirical Mode Decomposition


For comparison purposes we next run complete ensemble empirical mode decomposition, or
CEEMD. In this test we only compute the peak frequency attribute. Fill the menu out as
follows, using a time window of 400 to 800 ms, F3_Block as the input and CEEMD as the
output, and outputting the Peak Frequency Attribute. Note that CEEMD usually takes longer to
run than EEMD.

Then, click OK. When the calculation is done, we can compare EEMD Peak Frequency with
CEEMD Peak Frequency, as shown below for Inline 100 with the same colour scheme:

Note that there are only slight differences between the two results.


Next, we compute an arithmetic average between events FS8 and FS7, as done before for the
EEMD result. The result looks like this:

Note how similar this result is to the EEMD result, which is re-plotted below:


The short time Fourier transform, wavelet transform and S transform are well-known and well-proven time-frequency analysis tools. They decompose the seismic signal using pre-defined bases. Their main shortcomings are spectral smearing and the trade-off between time and frequency localization. Despite this they remain popular because they produce smooth and robust features, as shown in the above examples.

Empirical mode decomposition methods, like EEMD and CEEMD, show higher time-frequency resolution than the short time Fourier transform, wavelet transform and S transform, providing more frequency variation in the above examples. However, they are sensitive to the signal-to-noise ratio (Tary et al., 2014) and are computationally expensive. Basis pursuit works well even in the low signal-to-noise case (Tary et al., 2014), but it is also computationally expensive.

Our recommendation is to analyze the principal frequency variations by short time Fourier
transform, wavelet transform or S transform first, followed by identification of the subtle
changes in geology using the basis pursuit, EEMD and CEEMD in a smaller time window.

Advanced 3D Visualization
In the next section, we will use the Advanced 3D Visualization plugin to perform color blending
on these three volumes.

RGB Color Blending of the Frequency Slices

We will now use the Advanced 3D Visualization plugin in Geoview to perform RGB color blending of the three EEMD frequency volumes. (Note that this option is available from Hampson-Russell Software for an additional licence fee.) If you have access to this option, initialize it by selecting Advanced 3D Visualization from the Plugin option under Processes:


The blank Visualization window will then appear, with the Seismic Volume Select option in the
upper left hand part of the window:

For the Volume, select EEMD_ConstantFrequencyAmplitude_4_9Hz and choose Seismic for


the Data. Click Update. You will be prompted to convert the SEG-Y volume to an LDM volume to reduce memory requirements; click Yes.


After the volume has been converted, select Add Seismic Volume from the Seismic Volumes
Selection menu to the left of the 3D volume as shown below:

Click the Add Seismic Volume button at the bottom of the menu and add the
EEMD_ConstantFrequencyAmplitude_24_5Hz volume and then the

EEMD_ConstantFrequencyAmplitude_63_7Hz volume, as shown below:


Note that the data will also be displayed in the visualization panel:

Next, select the RGB Blending tab at the lower left hand side of the window (next to the Project
Manager). This opens the blank menu shown below, which will allow us to select the attributes
for the red, green and blue channels:


Fill out the menu by selecting these volumes, as shown below, and click Apply:

We can now display any combination of these volumes. The key menu for controlling which
volumes get displayed is View Data, which appears in the upper right hand part of the screen.

Note that the RGB Volume Data and the Volume Data that contains the individual volumes are
in the data list. Make sure that both RGB Volume and Volume have been expanded. Uncheck
the RGB Volume, and also uncheck the EEMD_ConstantFrequencyAmplitude_24_5Hz (LDM)
and EEMD_ConstantFrequencyAmplitude_63_7Hz (LDM) volumes to see the display of the


EEMD_ConstantFrequencyAmplitude_4_9Hz (LDM) volume. Double-click Seismic (from


EEMD_ConstantFrequencyAmplitude_4_9Hz) to open the View Parameters menu. The View
Parameters menu allows you to change the appearance of the volume. Scroll the Domain
scrollbar to 560 ms.


Next, uncheck the EEMD_ConstantFrequencyAmplitude_4_9Hz (LDM) and


EEMD_ConstantFrequencyAmplitude_63_7Hz (LDM) volumes, and select the checkbox of the
EEMD_ConstantFrequencyAmplitude_24_5Hz (LDM) volume to see this display:


Finally, uncheck the EEMD_ConstantFrequencyAmplitude_4_9Hz (LDM) and


EEMD_ConstantFrequencyAmplitude_24_5Hz (LDM) volumes, and select the checkbox of the
EEMD_ConstantFrequencyAmplitude_63_7Hz (LDM) volume to see this display:

Note that each of the frequency volumes defines the channel differently. A display of all three slices simultaneously will emphasize all of the subtle channel features. This involves inputting each of the frequency slices as a red, green or blue channel and mixing the colors, as sketched below.
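Conceptually, the mixing amounts to normalizing each frequency slice to the 0-1 range and stacking the three slices as the red, green and blue planes of one image. A minimal numpy/matplotlib sketch (independent of the plugin, using hypothetical slice arrays low_hz, mid_hz and high_hz):

```python
import numpy as np
import matplotlib.pyplot as plt

def normalize(slice_2d):
    """Scale a 2D slice to the 0-1 range for use as a colour channel."""
    lo, hi = np.percentile(slice_2d, [2, 98])      # clip the amplitude outliers
    return np.clip((slice_2d - lo) / (hi - lo), 0.0, 1.0)

def rgb_blend(low_hz, mid_hz, high_hz):
    """Blend three frequency slices as the R, G and B channels of one image."""
    rgb = np.dstack([normalize(low_hz), normalize(mid_hz), normalize(high_hz)])
    plt.imshow(rgb)          # red = low, green = mid, blue = high frequency
    plt.show()
    return rgb
```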


To do this, uncheck the EEMD_ConstantFrequencyAmplitude_4_9Hz (LDM),


EEMD_ConstantFrequencyAmplitude_24_5Hz (LDM) and
EEMD_ConstantFrequencyAmplitude_63_7Hz (LDM) volumes and select the checkbox for
the RGB Volume data. Also select that volume under View Parameters. Note that the default,
shown below, puts 4.9 Hz in the red channel, 24.5 Hz in the green channel and 63.7 Hz in the
blue channel. Turn off the Inline and Crossline checkbox and Scroll the Domain scrollbar to 600
ms.

Note that there are six different ways to combine the three frequency volumes with three colors.
The figures below show all six different ways of creating the display, which can be done by
changing the order of the inputs under View Parameters:


R = 4.9 Hz, G = 24.5 Hz, B = 63.7 Hz. R = 4.9 Hz, G = 63.7 Hz, B = 24.5 Hz.

R = 24.5 Hz, G = 4.9 Hz, B = 63.7 Hz. R = 24.5 Hz, G = 63.7 Hz, B = 4.9 Hz.

R = 63.7 Hz, G = 24.5 Hz, B = 4.9 Hz. R = 63.7 Hz, G = 4.9 Hz, B = 24.5 Hz.

Obviously, there is no one best way to combine the volumes, since each combination highlights
different features.


Next, let’s visualize the whole cube. Click the Hand Icon and rotate the RGB volume to a
convenient view. Also, change Z scaling in the General option of View Parameters to 10, as
shown below:

Next, click the Arrow icon and move the slices as shown below. If you go to the RGB volume
parameters in the View Parameters menu, you can change the Inline to 250, Crossline to 724
and Domain to 600, as shown below:


At this stage, the gas anomalies show their bright-spot character much more clearly in the RGB blended view.

Next, uncheck the RGB Volume Data and click the


EEMD_ConstantFrequencyAmplitude_4_9Hz (LDM) volume. Again, in the View Parameters
menu, you can change the Inline to 250, Crossline to 724 and Domain to 600, as shown below:

Note that the yellow arrow highlights the major channel that was visible on the 4.9 Hz volume. Also note that the gas anomalies contain very strong energy in this low-frequency amplitude volume.

Continue to explore the datasets using 3D visualization. More information about this advanced tool can be found in the Advanced 3D Viewer Guide.


Multi-trace Filters and QC


Before discussing multi-trace attributes, we first discuss multi-trace filters and methods to quality control the seismic data prior to running the attributes. It is important that the seismic volumes input to multi-trace attributes such as curvature have a good signal-to-noise ratio, and multi-trace filters can help improve the data before the attributes are run.

Edge Preserving Filters

The first set of spatial filters we discuss are the Edge Preserving Filters which, as the name implies, preserve the position of sharp edges. These may be applied to the seismic volume to pre-condition the data prior to computing the multi-trace attributes. The Edge Preserving Filters are accessed from the processing menu under Multi-trace Filters in the Volume Attributes list:

There are eight different types of filters included in the Edge Preserving Filters option:

As discussed in Appendix 3, the simplest filters to understand are the mean, median and mode filters, in which a rectangular window of n inline by m cross-line traces is collected around a central trace. Then the mean, median or mode of the n*m samples replaces the central sample at each time.


A slightly more advanced option is the alpha-trimmed mean, in which the samples are arranged
from smallest to largest, a given percentage of the two tails are rejected, and the mean is taken of
the remainder.
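For illustration (this is not the HRS code), an alpha-trimmed mean filter applied to one time slice could be sketched in Python as follows, where alpha is the fraction of sorted samples rejected from each tail (roughly the role of the Alpha Trim % menu parameter):

```python
import numpy as np

def alpha_trimmed_mean(window_samples, alpha=0.25):
    """Mean of the window after discarding a fraction `alpha` of the sorted
    samples from each tail (alpha = 0 gives the mean, alpha -> 0.5 the median)."""
    x = np.sort(np.ravel(window_samples))
    k = int(alpha * x.size)                     # samples to drop at each end
    trimmed = x[k:x.size - k] if k > 0 else x
    return trimmed.mean()

def filter_slice(time_slice, half=1, alpha=0.25):
    """Apply the alpha-trimmed mean over a (2*half+1) x (2*half+1) trace window."""
    ni, nx = time_slice.shape
    out = time_slice.astype(float).copy()
    for i in range(half, ni - half):
        for j in range(half, nx - half):
            out[i, j] = alpha_trimmed_mean(
                time_slice[i - half:i + half + 1, j - half:j + half + 1], alpha)
    return out
```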

To implement this option select Volume Attributes>Edge Preserving Filters, and fill out the menu as shown here, selecting F3_Block as the input, calling the output ATM, selecting Alpha Trimmed Mean as the filter, and using the default parameters (Inline and Xline filter lengths of 3 and Alpha Trim % = 0.5):

Click OK and the resulting filtered volume will be displayed in View 2:


Note the improvement in the event continuity, although the original processing was also very
good.

Next, we will create an amplitude map over the Truncation horizon for the ATM output. To do
this, click the Processes tab and select Slice Processing>Create Data Slice. At the top of the
menu, select ATM as the Input, select Truncation for the Target Event, Exactly on target for
the Window Option and call the output Truncation_ATM:

Also make sure that under the Advanced options, Raw Amplitude is set for the transform
option and Arithmetic Mean for the averaging option. Click OK, and the output looks like this:


Note that there is very little change when compared to the Truncation slice of the original F3
Block dataset, suggesting that the original processing using the Insight Footprint removal was
optimal.

Another filter option is the multistage median-based modified trimmed-mean, which is described
more fully in Al-Dossary and Marfurt (2003). Multistage median-based modified trimmed-mean
was originally developed for image processing with the objective of filtering the data while
preserving the edges as well as thin and linear features. Al-Dossary and Marfurt (2003) extended
this algorithm to seismic and found that preconditioning the seismic with this filter resulted in
attribute images with more detailed information.

To try this option select Volume Attributes>Edge Preserving Filters, and fill out the menu as
shown here, selecting F3_Block as the input, calling the output MSMTM, selecting Multistage
median-based modified trimmed-mean as the filter, and using the default parameters, as
shown here:

This option takes considerably more processing resources than the previous option we tried. The figure below shows the result of the multistage median-based modified trimmed-mean filter, compared with both the input and the alpha-trimmed mean. Note that the alpha-trimmed mean appears to be slightly better than the other two results. We will therefore use it for the following attribute analyses.


3D Gaussian Smoothing Filter

Another type of filter used to pre-condition the seismic data prior to computing volume attributes
is the 3D Gaussian Smoothing Filter option in the Multi-trace Filters under Volume
Attributes list, as shown below:

The Gaussian smoother involves running a three-dimensional Gaussian operator over the seismic volume, with the standard deviations in each direction controlled by the user. The filter is implemented using a recursive algorithm, which makes its computation highly efficient. The 3D Gaussian smoothing filter is a low-pass filter and attenuates high-frequency noise. It may therefore be applied after processing steps which amplify high-frequency noise, such as deconvolution.
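For readers who want to prototype the same kind of operation outside Geoview, SciPy offers an equivalent separable Gaussian smoother; a minimal sketch (not the HRS implementation) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# volume is assumed to be a 3D numpy array ordered (inline, crossline, time)
def gaussian_smooth(volume, sigma_inline=1.0, sigma_xline=1.0, sigma_time=1.0):
    """Low-pass the volume with a 3D Gaussian operator whose standard
    deviations (in samples) are set per direction by the user."""
    return gaussian_filter(volume, sigma=(sigma_inline, sigma_xline, sigma_time))
```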

When the 3D Gaussian Smoothing Filter option appears, select F3_Block as the input and
name the output GaussSmooth. Default the window parameters as shown below (note that this
involves a moving window of 3 inlines by 3 crosslines and a standard deviation of 1 sample in
both the inline and crossline directions).

Then, click OK at the bottom of the menu:


The resulting display will appear as shown below.

Next, we will take a slice through the Gaussian filtered volume. To do this, click the Processes tab and select Slice Processing>Create Data Slice. At the top of the menu, select GaussSmooth as the Input, select Truncation for the Target Event, Exactly on target for the Window Option and call the output Gauss_Smooth_Truncation as shown below:


Click OK and, after changing the color scale to Grey Scale (Inverted) the extracted map will look
like this:

Note that with the default parameters, the results on both the vertical section and the map appear
almost too smooth. However, for some applications, like the instantaneous curvature option we
will use later, this type of smoothing may be necessary.

Anisotropic Diffusion Filter

Another type of pre-conditioning filter that can be applied to your data before computing volume
attributes is the Anisotropic Diffusion Filter option in the Multi-trace Filters under Volume
Attributes list, as shown below:


The detailed theory of the Anisotropic Diffusion Filter is discussed in Appendix 4; note that Fehmers and Höcker (2002) also call this process structure-oriented filtering. The Anisotropic Diffusion Filter removes noise by applying a smoothing operation parallel to the seismic reflections, thereby emphasizing the main geological and structural features of the data. When the edge-detection option is turned on, the algorithm detects and preserves the fault information in the 3D seismic data.
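The full structure-oriented algorithm is beyond a short example, but the core idea of edge-stopping diffusion can be illustrated with a simple 2D Perona-Malik sketch (a related but much simpler scheme, not the Fehmers and Höcker algorithm used by the program): smoothing proceeds where gradients are small and is suppressed across strong edges.

```python
import numpy as np

def perona_malik_2d(img, n_iter=20, kappa=30.0, dt=0.15):
    """Edge-stopping diffusion sketch (illustration only).

    kappa controls what counts as an edge; dt is the diffusion step size.
    Boundaries are treated as periodic (np.roll) for simplicity.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: near 1 in smooth areas, near 0 at edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```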

When the Anisotropic Diffusion Filter option appears, select F3_Block as the input and use the default output name DiffusionFilter. We will also use the default parameters in the program; more detailed information about the parameters can be found in Appendix 4.

Then, click OK at the bottom of the menu:


The resulting display (showing only the color data) will appear as shown below. The Anisotropic Diffusion Filter is an edge-preserving filter: it smooths out the noise while nicely preserving the fault structure.

Next, we take a horizon slice through the anisotropic diffusion filter volume. To do this, click the
Processes tab and select Slice Processing>Create Data Slice. At the top of the menu, select
DiffusionFilter as the Input, select Truncation for the Target Event, Exactly on target for the
Window Option and call the output DiffusionFilter_Truncation as shown below:


Click OK and, after changing the color scale to Grey Scale (Inverted) the extracted map will
look like this:

Bandwidth attribute

The basic requirement for reservoir characterization is that the data are processed in a way which
ensures that the underlying propagating wavelet has near-constant properties (amplitude, phase
and frequency). To monitor the wavelet characteristics, seismic bandwidth is a key parameter of
seismic quality and interpretability (Araman and Paternoster, 2014).

The bandwidth index attribute measures the bandwidth of the wavelet from the autocorrelation of the seismic trace, which in turn approximates the autocorrelation of the wavelet. The attribute calculates two ratios which are subsequently analyzed. The first ratio (ΔT1/ΔT0) is the ratio of the time distance ΔT1 between the main peak and the first trough of the autocorrelation function to the time distance ΔT0 between the main peak and the first zero-crossing. The second ratio (A1/A0) is the peak-to-trough magnitude ratio of the autocorrelation function (see figure below). These measurements are independent of absolute amplitude and frequency values and are characteristic of the shape of the signal. Broadband signals are located in the lower-left corner of the (ΔT1/ΔT0; A1/A0) cross-plot, while narrowband signals are located in the upper-right corner of this space (Araman et al., 2012).
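As a rough illustration of how the two ratios could be measured from a single trace (this is not the HRS implementation), the sketch below autocorrelates the trace and then picks the first zero-crossing and first trough after the main peak:

```python
import numpy as np

def bandwidth_ratios(trace, dt):
    """Return (dT1/dT0, A1/A0) measured from the trace autocorrelation.

    A0 is the zero-lag value, A1 the magnitude of the first trough,
    dT0 the lag of the first zero-crossing, dT1 the lag of the first trough.
    No guards are included for pathological traces; this is a sketch.
    """
    ac = np.correlate(trace, trace, mode="full")[trace.size - 1:]  # lags >= 0
    a0 = ac[0]
    zero_idx = np.argmax(ac < 0.0)                 # first zero-crossing
    d = np.diff(ac)
    trough_idx = zero_idx + np.argmax(d[zero_idx:] > 0.0)  # first local minimum
    a1 = abs(ac[trough_idx])
    dt0, dt1 = zero_idx * dt, trough_idx * dt
    return dt1 / dt0, a1 / a0
```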


To start the bandwidth attribute, select Bandwidth Attribute from the Volume Attributes list
under Processes:

This brings up the menu shown below. Select F3_Block for the Input and use the default output name Bandwidth. Set the start time to 300 ms and the end time to 1100 ms for the auto-correlation window. Click OK to run the process.


When finished, the result should look like the display below:

The process outputs six strips (ignore the information from 300 ms to 750 ms) which correspond to A0, A1, T0, T1, the A1/A0 ratio and the ΔT1/ΔT0 ratio, from top to bottom. Since we want to cross-plot ΔT1/ΔT0 against A1/A0, we need to create time slices for the 5th strip (908 ms in this case) and the 6th strip (950 ms in this case).


From the Processes tab select Slice Processing>Create Data Slice. At the top of the menu,
select Bandwidth as the Input, enter 908 ms for the Target Event, Exactly on target for the
Window Option and call the output A0A1_Ratio, as shown below.

Then, select OK. The output result is shown here:


We then repeat the process for the ΔT1/ΔT0 ratio by extracting a horizon slice at a Constant of 950 ms and outputting the result to T0T1_Ratio, as shown below.

The resulting ΔT1/ΔT0 ratio slice should look similar to the display below:


To crossplot the final result, go to Processes>Cross Plotting>Cross Plot Data Slices.

Fill in the menu as shown below and click OK.


The crossplot should look like the one shown below:

To see the results more clearly, right-click and select the Plot option. Set x min to 1 and x max to 2, and set y min to 0 and y max to 1.


The current display is deceptive as it is difficult to see how many points are in any particular bin.
Let us change the plot to use a histogram to address this. Right-click the crossplot and select
Show histogram and then Histogram map as shown below.

The final crossplot of A0A1_Ratio vs T0T1_Ratio is shown below.


Multi-trace Attributes – An introduction


Next, we discuss multi-trace attributes, which in general are computed using a moving window
in three dimensions: in-line, cross-line and time. Note, in many cases, the time window includes
the whole trace.

Energy Ratio and Coherence

Next, we will turn to the Energy Ratio Attribute option in the Volume Attributes list:

The theory of the Energy Ratio Attribute and other similar attributes is given in Appendix 5. In
that appendix it is pointed out that if we think of each seismic trace in a seismic volume as being
a signal which correlates with other signals then we can calculate the degree of correlation
among the traces. This may be done by creating a correlation, or covariance, matrix between all
the possible trace-to-trace combinations. This idea was first presented by Kramer and Mathews
(1956) in the context of electrical signals, and was extended to seismic signals for the purpose of
signal-to-noise enhancement by Jones and Levy (1987).

Next, the information in the seismic covariance matrix was used to look for seismic
discontinuities, and was called coherency. The first coherency method involved finding the
maximum correlation coefficients between adjacent traces in the x and y directions, and taking
their harmonic average (Bahorich and Farmer, 1995). Its mathematical form is given in
Appendix 5. Marfurt et al. (1998) extended this by computing the semblance of all combinations
of all the traces in a rectangular window around a central trace for which the semblance was to
be computed. This can be done by cross-correlating all of the traces in an m in-line by n cross-
line by k time sample moving window throughout the seismic volume, which creates a cross-
correlation volume centered at each sample of the seismic volume. This also involves searching
over all dips in the inline and cross-line directions, discussed in Appendix 5.

Gersztenkorn and Marfurt (1999) created the third-generation coherency algorithm by computing
the maximum eigenvalue of each cross-correlation volume and normalizing this value by the
sum of eigenvalues of the cross-correlation volume. Their algorithm is referred to as the eigen-
structure coherency algorithm. As shown in Appendix 5, the Energy Ratio Attribute is


mathematically equivalent to the eigenstructure coherency algorithm, but is based on the concept
that was published by Jones and Levy (1987) before the paper on eigenstructure coherence.
Essentially, the Jones and Levy (1987) approach uses principal components rather than
eigenvalues, but can be shown to be identical to the eigenstructure method. Again, a more
mathematical description of this is given in Appendix 5.
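To make the idea concrete, the following numpy sketch (a simplified illustration, not the HRS code) computes an eigenstructure-style coherence for a single analysis window: build the covariance matrix of the windowed traces and take the ratio of the largest eigenvalue to the total energy.

```python
import numpy as np

def energy_ratio_coherence(window):
    """window: (n_traces, n_samples) array of traces in the analysis window.

    Returns the first-eigenvalue (principal component) energy ratio, i.e. the
    fraction of the total energy explained by the most coherent component
    (1.0 = perfectly coherent, smaller values indicate discontinuities)."""
    cov = window @ window.T                 # n_traces x n_traces covariance matrix
    eigvals = np.linalg.eigvalsh(cov)       # eigenvalues in ascending order
    return eigvals[-1] / np.sum(eigvals)    # largest eigenvalue / total energy
```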

There are also several new features in the Energy Ratio Attribute that were not found in the
original eigenstructure coherency. These new options are the ability to simultaneously select
multiple eigenvalues (or principal components) and also to compute different types of averages.

To implement the Energy Ratio Attribute option select Volume Attributes>Energy Ratio
Attribute, and fill out the menu as shown below, selecting F3_Block as the input, calling the
output EnergyRatio, and changing the number of energy ratio coherence volumes to 2. That is,
we will output two volumes: the first eigenvalue or principal component volume and the second
eigenvalue or principal component volume.

Default the window parameters as shown below (note that this involves a moving window of 3 inlines by 5 crosslines by 5 time samples), but turn off the dip scanning by changing the search over inline and cross-line dips to between 0 and 0 with a single dip search. This will increase the speed of the computation. Later, we will perform an optimal dip search and compare the results to these results.

Then, click OK at the bottom of the menu:


When the process is finished two volumes will have been created: Energy_Ratio_1 maximum,
which is the result of using the first principal component, and Energy_Ratio_2 maximum,
which is the result of using the second principal component.

The default display is Energy_Ratio_1 maximum, but we will first have to optimize the display.

Right-click the color bar to bring up the following menu and first select Modify Range:

Click Default Scan, set Sharing: to Individual and click OK:

Then, right-click the color bar to bring up the following menu and select Modify Color
Scheme:

Select Grey Scale (Inverted) as the color scheme:


You will note that the display looks too dark, so right-click the color bar one final time and
select Reverse Colors:

If you move to line 100, the display should now look like this, with the faults clearly shown on
the energy ratio volume:


Next, let us display the Energy_Ratio_2 maximum volume. To do this, right-click the
Energy_Ratio_1 maximum volume and change the Color Data Volume to Energy_Ratio_2
maximum as shown below:

Again, optimize the amplitude range and reverse the color bar (since the second Principal Component is close to the reverse polarity of the first). On Inline 100, the result should look like this:


Next, we will compare Principal Components 1 and 2 (PC 1 and PC 2 for short). In the initial Energy Ratio results window we have been looking at, both volumes are loaded. However, if you look under Project Data>Seismic, note that both EnergyRatio_1_maximum and EnergyRatio_2_maximum are independently saved. Open a third window by clicking on the 3 icon:

Then, drag-and-drop Energy_Ratio_1 maximum into View 1 using the default, which is to Replace data in View:

Next, drag-and-drop Energy_Ratio_2 maximum into View 2 using the defaults on the menu.

Finally, drag-and-drop Energy_Ratio_1 maximum and Energy_Ratio_2 maximum into View 3, using Don't close existing data for the Energy_Ratio_2 maximum volume.

When the data is displayed, right-click View 3 and change the Color Data Volume to
Energy_Ratio_1 maximum-Energy_Ratio_2 maximum as shown here:


After optimizing the color bar, the display should now look like this:

Note that by taking the difference between PC 1 and PC 2, we have made the faults appear a
little more clearly.

Next, we will create an amplitude map over the Truncation horizon for the Energy_Ratio_1
maximum output. To do this, click the Processes tab and select Slice Processing>Create
Data Slice.

At the top of the menu, select Energy_Ratio_1 maximum as the Input, select Truncation for
the Target Event, Exactly on target for the Window Option and call the output
ER_1_Truncation.


The completed menu will appear as shown below:

Click OK and, when the map appears, right-click the color bar to modify the range by selecting
Modify Range, selecting the Grey Scale (Inverted) color bar and then reversing the color bar by
selecting Reverse Colors. The result looks like this:


Next, we will create an amplitude map for the Energy_Ratio_2 maximum output. Click the
Processes tab and select Slice Processing>Create Data Slice. Select Energy_Ratio_2
maximum as the Input, Truncation for the Target Event, Exactly on target for the Window
Option and call the output ER_2_Truncation. The completed menu is shown below:

Click OK and, when the map appears, right-click the color bar to modify the range by selecting
Modify Range, selecting the Grey Scale (Inverted) color bar and then reversing the color bar by
selecting Reverse Colors.


The result looks like this:

Let us now create the difference map. Select Slice Processing>Advanced Map Maths, select
the map slices ER_1_Truncation and ER_2_Truncation and give them the Variable Names
ER_1 and ER_2, as shown below. Then, select <Create New> to create a new Map Maths
script:


In the next menu, create the expression ER_1-ER_2, and then click Save As …:

Call the script ER_Diff and click OK:


Now, move to the bottom of the main menu. Fill out this part of the menu by calling the output
ER_Difference, and click OK:

The final map will look like this. Note that the faults are more clearly defined than on either of
the previous maps:

By using the default parameters, as we did in our first energy ratio example, we are assuming
that the best correlation is at zero dip, since the correlation cube only contains the zeroth-lag
correlation values. However, as pointed out by Marfurt et al. (1999), the image can be improved
by scanning over multiple dips in the inline and crossline directions.

To implement the Energy Ratio Attribute option using dip scans, again select Volume
Attributes>Energy Ratio Attribute, and fill out the menu as shown below, selecting F3_Block


as the input, calling the output ER_Dips, letting the number of energy ratio coherence volumes
equal 2 and defaulting the window parameters.

However, this time type in the Inline Dip from -4 to +4 ms/tr with a number of dips equal to 3,
and the Xline dip from -4 to +4 with a number of dips of 3. That is, we will scan over all
combinations of -4, 0 and +4 ms per trace in both the inline and crossline directions (9
combinations in all) and find the maximum cross-correlation value. Note that we used an
increment of 4 ms since this is the sample rate.

To start the process, click OK at the bottom of the menu:


When the processing is finished, you will see a display of the F3_Block input volume in View 1
and ER_Dips_1_Maximum, the first Principal Component, in View 2, as follows:

Let us now compare the results with and without dip scanning. Go to Project Data>Seismic and
drag-and-drop ER_Dips_1 maximum into View 1 using the default, which is to Replace data in
View. Next, drag-and-drop Energy_Ratio_1 maximum into View 2, again using the defaults on
the menu. After optimizing the color bars, the display should now look like this:


The results are quite similar for the first Principal Component. Let us next do the same
comparison for the second Principal Component. Again, go to Project Data>Seismic and drag-
and-drop ER_Dips_2 maximum into View 1 using the default, which is to Replace data in
View. Next, drag-and-drop Energy_Ratio_2 maximum into View 2, again using the defaults on
the menu. After optimizing the color bars, the display should now look like this:

Note that the display with dip scans is slightly better defined. Next, we will create an amplitude
map for the ER_Dips_1 maximum output. Click the Processes tab and select Slice
Processing>Create Data Slice. Select ER_Dips_1 maximum as the Input, Truncation for the
Target Event, Exactly on target for the Window Option and call the output
ER_1_Truncation. The completed menu is shown below:


Click OK and, after optimization of the color bar, the map looks like this:

Finally, we will create an amplitude map for the ER_Dips_2 maximum output. Click the Processes tab and select Slice Processing>Create Data Slice. Select ER_Dips_2 maximum as the Input, Truncation for the Target Event, Exactly on target for the Window Option and call the output ER_Dips_2_Truncation.


Click OK and, after optimization of the color bar, the map looks like this:

Note that the map results are slightly improved over the results without the dip scan. But, in general, the extra computation time involved in creating the dip-scanned result probably does not justify the slight improvement. This is probably because the moving space/time window was so small; for a larger moving window, the difference would likely be more noticeable.


Edge Enhancement Attributes

Edge enhancement attributes (Appendix 6) are a group of attributes that have been adapted from
edge detection algorithms used to analyse potential field data. To initiate this group of attributes,
select Edge Enhancement Attributes from the Volume Attributes list under Processes:

On the resulting menu, select F3_Block for the Input, call the output Edge, default the values in
the Work Window, and select all 7 attributes by clicking on their checkboxes, as shown below:

The first three attributes are the derivatives of the seismic volume in the x, y and vertical (time in
this case) directions, which written mathematically are df/dx, df/dy, and df/dz, where f is the
seismic volume (as shown in the menu). Although these are not the final attributes used for edge
enhancement, they are input to all subsequent edge enhancement attributes and so are interesting


to look at in their own right. We shall explain what these other attributes are as we display them.
So for now, simply click OK at the bottom of the menu:

After processing, the default display will show the input in View 1 and the inline or x derivative
in View 2, as shown below:

To see vertical slices of the other volumes, right-click the View 2 display and select Color Data
Volume. Note that any of the 7 computed attributes can be displayed:


Selecting Edge_dDy gives the following display:

And lastly selecting Edge_dDz gives the following display:


Total horizontal derivative (TDX)


These spatial derivatives may be combined to create more meaningful attributes. The first
attribute we shall examine is the Total horizontal derivative or TDX. The TDX is calculated
from the two horizontal spatial partial derivatives

$$\mathrm{TDX} = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}.$$

This is also known as the Sobel filter and is used in image processing to sharpen photographic images; in seismic interpretation, the TDX attribute emphasizes the edges in the seismic data. More details are provided in Appendix 6: Total horizontal derivative. To display this volume attribute select Edge_TDX. In the following figure we have changed the color scheme to Intensity:


Tilt Angle (TT)


Selecting Edge_TT results in the following display:

The tilt angle, TT, was introduced by Miller and Singh (1994) and is a measure of the
normalized vertical derivative.

 
$$TT = \arctan\!\left(\frac{\partial f/\partial t}{\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}}\right) = \arctan\!\left(\frac{\partial f/\partial t}{\mathrm{TDX}}\right).$$

Note that the tilt angle measures the angle between the vertical derivative and the magnitude of the horizontal gradient, or TDX. For more details see Appendix 6: Tilt Angle.

Normalized amplitude horizontal gradient


Next let’s select and display Edge_COSTHETA (or the Normalized amplitude horizontal gradient)
resulting in the following display:


The theta map was introduced by Wijns et al. (2005) and is mathematically defined as

$$\cos\theta = \frac{\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}}{\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2} + \left(\frac{\partial f}{\partial t}\right)^{2}}},$$

where the term $\sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2} + \left(\frac{\partial f}{\partial t}\right)^{2}}$ represents the amplitude of the analytic signal.


Normalized standard deviation ratio of the derivatives


Finally, let’s select and display Edge_NSTD. Change the color bar to the Grey Scale
(Inverted):

This is the normalized standard deviation ratio of the derivatives (Cooper and Cowan, 2008), mathematically defined as

$$\mathrm{NSTD} = \frac{\sigma\!\left(\frac{\partial f}{\partial t}\right)}{\sigma\!\left(\frac{\partial f}{\partial x}\right) + \sigma\!\left(\frac{\partial f}{\partial y}\right) + \sigma\!\left(\frac{\partial f}{\partial t}\right)},$$

where $\sigma(\cdot)$ denotes the standard deviation computed over a moving window.

Total horizontal derivative of the tilt angle (THDR)


One attribute discussed in Appendix 6 but not yet generated is the total horizontal derivative of the tilt angle, or THDR (Verduzco et al., 2004):


$$\mathrm{THDR} = \sqrt{\left(\frac{\partial TT}{\partial x}\right)^{2} + \left(\frac{\partial TT}{\partial y}\right)^{2}}.$$

Although the THDR operation looks like a single derivative, it is actually a second derivative of the original data when combined with the equation defining the TDX. Thus, as pointed out by Cooper and Cowan (2008), this operator can enhance the noise in the data if you are not careful with the initial processing.
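All of the derivative-based attributes in this section, including THDR, can be prototyped with numpy gradients. The sketch below (an illustration only, not the HRS code) computes TDX, the tilt angle TT, COSTHETA, NSTD and THDR for a volume ordered (inline, crossline, time), using a small moving window for the standard deviations in NSTD:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_attributes(f, win=5, eps=1e-12):
    """Edge-enhancement attribute sketch for a volume f ordered (inline, crossline, time)."""
    dfx, dfy, dft = np.gradient(f)                       # spatial and time derivatives

    tdx = np.sqrt(dfx**2 + dfy**2)                       # total horizontal derivative
    tt = np.arctan2(dft, tdx + eps)                      # tilt angle
    costheta = tdx / np.sqrt(dfx**2 + dfy**2 + dft**2 + eps)

    def local_std(g):                                    # windowed standard deviation
        m = uniform_filter(g, size=win)
        m2 = uniform_filter(g * g, size=win)
        return np.sqrt(np.maximum(m2 - m * m, 0.0))

    sx, sy, st = local_std(dfx), local_std(dfy), local_std(dft)
    nstd = st / (sx + sy + st + eps)                     # normalized std ratio

    ttx, tty, _ = np.gradient(tt)                        # THDR: horizontal gradient of TT
    thdr = np.sqrt(ttx**2 + tty**2)
    return tdx, tt, costheta, nstd, thdr
```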

In practice, the THDR is generated by applying the horizontal-gradient (TDX) operation to the tilt angle volume, TT. The flow is described below:

To compute the final edge enhancement attribute, the total horizontal derivative of the tilt angle, or THDR, again select Edge Enhancement Attributes from the Volume Attributes list under Processes. On the menu, select Edge_TT for the Input, call the output THDR, default the values in the Work Window, and select Amplitude Horizontal Gradient by clicking on its checkbox:


Click OK and the final attribute, THDR, will be displayed next to the input:

Each of the 8 attributes that we have displayed in vertical cross-section shows us a different
aspect of the seismic volume. However, to get a better feeling for what the attributes are telling
us, we need to look at structural and time slices over each of the attributes. To do this, we will
focus on both the Truncation picked seismic event, and a time slice at 1300 ms.

For each of these slices, we will create the attributes in the following logical order:

1. The in-line derivative.


2. The cross-line derivative.
3. Total horizontal derivative, or TDX, since it is the square root of the sum of the squares of
the previous two attributes.
4. The vertical derivative.
5. Tilt angle, TT, since it is the arctangent of the vertical derivative over the total horizontal
derivative.
6. The theta map, or COSTHETA, since it is the ratio of the total horizontal derivative over
the total derivative, which is calculated using the square root of the sum of the squares of
all three derivatives.
7. The normalized standard deviation of the derivatives, or NSTD.
8. The total horizontal derivative of the tilt angle, or THDR.

We will start with the time slice at 1300 ms and, before computing the attributes, look at the slice
through the F3_Block itself.


To do this, click the Processes tab and select Slice Processing>Create Data Slice. Select
F3_Block as the Input, Exactly on target for the Window Option, a Constant of 1300 ms, and
a Base name of F3_1300. The completed menu is shown below:

Note that we will use the parameters in this menu for the next set of slices, so will not show this
menu again. The only changes will be the Input and the Base name.


Click OK. After using the Modify Range option to turn off the Normalized color scale and perform a Default Scan, and using Modify Color Scheme to change the color scheme to Green to Brown, the result is as shown next:

Although the above color scale highlights the positive and negative amplitudes on the slice,
many of the following attributes only have positive amplitudes, so to highlight these we will be
using a grey scale. For comparison with the seismic time slice, use the Modify Color Scheme
option to change the color scheme to Grey Scale (inverted), as shown here:

(Is it just me, or does that look like a dinosaur skeleton?).


Next, let us take a slice at 1300 ms over the inline derivative, dDx. The result, after converting
to a normalized scale and grey level color scheme, looks like this:

We will now take a slice at 1300 ms over the cross-line derivative, dDy. The result, again after
converting to a normalized scale and grey level color scheme, looks like this:


Note that both the inline and cross-line derivatives give the structures a type of 3D effect, lighted
from different directions.

Next, we will perform a slice at 1300 ms over the total horizontal derivative, or TDX, which is
the square root of the sum of the squares of the previous two attributes. The result after
converting to a normalized scale and grey level color scheme, looks like this:

We will now take a slice at 1300 ms over the vertical (time) derivative, dDz. The result, again
after converting to a normalized scale and grey level color scheme, looks like this:


Note that the vertical derivative, just like the in-line and cross-line derivatives, has positive and
negative amplitudes which are seismic amplitude differences. However, the next two attributes
that we will display, the tilt angle and the theta map, have quite different scales.

First, we will take a 1300 ms slice of the Tilt angle, TT, which is the arctangent of the vertical
derivative over the total horizontal derivative. As an arctangent, the results will be in radians,
from –π/2 to +π/2 (that is -90 degrees to +90 degrees). Therefore, we will change to a
normalized amplitude scale and use the Phase color scheme, as shown here:

Next, we take a 1300 ms slice over the theta map, or COSTHETA, which is the ratio of the total
horizontal derivative over the total derivative, calculated using the square root of the sum of the
squares of all three derivatives.


The result, after changing to a normalized amplitude scale and grey level scale, is as follows:

Next, we take a 1300 ms slice over the normalized standard deviation of the derivatives, or
NSTD, volume. The result, after changing to a normalized amplitude scale and grey level scale,
is as follows:


Finally, we take a 1300 ms slice over the total horizontal derivative of the tilt angle, or THDR. The result, after changing to a normalized amplitude scale and grey level scale, is as follows:

Notice that this result probably gives the best definition of the structures.

Next, let us look at some slices over the Truncation interval, starting with the F3 seismic volume
itself.

To do this, click the Processes tab and select Slice Processing>Create Data Slice. Select
F3_Block as the Input, Exactly on target for the Window Option, the Event called
Truncation, and a Base name of F3_Truncation. The completed menu is shown below:


We will use the parameters in this menu for the next set of slices, so will not show this menu
again. The only changes will be the Input and the Base name.

Click OK. After using the Modify Range option to turn off the Normalized color scale and perform a Default Scan, and using Modify Color Scheme to change the color scheme to Green to Brown, the result is as shown next:

As with the time slice shown previously, the above color scale highlights the positive and
negative amplitudes on the structure slice, but many of the following attributes only have
positive amplitudes, so to highlight these we will be using a grey scale. For comparison with the
seismic time slice, use the Modify Color Scheme option to change the color scheme to Grey
Scale (inverted), as shown here:


Next, let us take a slice at the Truncation event over the inline derivative, dDx. The result, after
converting to a normalized scale and grey level color scheme, looks like this:

We will now take a slice at the Truncation event over the crossline derivative, dDy. The result,
after converting to a normalized scale and grey level color scheme, looks like this:


When you compare the two maps slices from the inline and crossline derivatives, note that
structures in different directions have been enhanced.

Next, we will perform a slice at the Truncation event over the total horizontal derivative
volume, or TDX, which is the square root of the sum of the squares of the previous two
attributes. The result after converting to a normalized scale and grey level color scheme looks
like this (note also that the automatic values from the default scan have not been used, but the
maximum amplitude has been set to 1000):

Note that the TDX result is now emphasizing the events that look like true structure across the map surface. We will now take a slice at the Truncation event over the vertical, or time, derivative, dDz. The result, again after converting to a normalized scale and grey level color scheme, looks like this:


Next, we will take a slice at the Truncation event of the Tilt angle, TT, which is the arctangent
of the vertical derivative over the total horizontal derivative. As an arctangent, the results will be
in radians, from –π/2 to +π/2 (that is -90 degrees to +90 degrees). Therefore, we will change to a
normalized amplitude scale and use the Phase color scheme, as shown here:

Next, we take a slice at the Truncation event over the theta map, or COSTHETA, which is the
ratio of the total horizontal derivative over the total derivative, calculated using the square root of
the sum of the squares of all three derivatives. The result, after changing to a normalized
amplitude scale and grey level scale, is as follows:


Now we will take a slice at the Truncation event over the normalized standard deviation of the
derivatives, or NSTD, volume. The result, after changing to a normalized amplitude scale and
grey level scale, is as follows:

The NSTD attribute appears to work much better on a structure slice than on our earlier time slice, in which the amplitudes appeared quite random. Finally, we take a slice at the Truncation event over the total horizontal derivative of the tilt angle, or THDR. The result, after changing to a normalized amplitude scale and grey level scale, is as follows:

Of all of the attributes we have considered so far, this one appears to have defined the structure
the best.


Curvature methods

Introduction
Curvature methods are based on a paper by Roberts (2001), in which he used differential geometry to show that the curvature of an event in the earth's subsurface could be estimated from a time structure map by fitting a local quadratic surface to this map. The full mathematics for curvature is developed in Appendix 7, but the following figure, taken from Appendix 7 and the Roberts (2001) paper, illustrates the concept in simple terms:

Sign convention for curvature, where the arrows represent normal vectors to the surface (from Roberts, 2001).

When we extend our structure to three dimensions, we can then define four basic types of
curvature:

Kmin = the minimum curvature,

Kmax = the maximum curvature,

Kd = the curvature in the dip direction, and

Ks = the curvature in the strike direction.

In addition to these four types of curvature, there are also two other important curvatures that are
defined from the minimum and maximum curvatures, Km, the mean curvature, which is the
average of the two, and Kg, the Gaussian curvature, which is the product of the two:

Km = (Kmax + Kmin)/2, and

Kg = Kmax Kmin.


In Appendix 7, it is shown that using mean and Gaussian curvature, as well as minimum and
maximum curvature, gives us a way of classifying structures. In summary, for Gaussian
curvature greater than or equal to zero, the mean curvature tells us whether our structure is
concave or convex, and Gaussian curvature less than zero tells us that we have a saddle structure.

In Appendix 7, it is also shown that all of the curvature attributes can be computed by finding the
six coefficients a through f in the quadratic equation given by:

$$z(x, y) = ax^{2} + by^{2} + cxy + dx + ey + f.$$

The a, b, and c coefficients define an ellipsoid and the d, e and f coefficients define a dipping
plane. These coefficients were estimated by Roberts (2001) over a structure map by using a
moving 3x3 grid.

As shown in Appendix 7, the maximum, minimum, dip, strike, mean and Gaussian curvatures discussed above are a function of the first five coefficients in the equation above (that is, a through e; the constant f never enters into the formulations because derivatives are used to generate the terms, so the f term cancels). However, there are two other attributes that are only a function of the in-line and cross-line dip terms d and e. These are the total dip γ, given by the arctan of the square root of the sum of the squares of d and e, and the azimuth α, given by the arctan of the ratio of e to d. If the inline and crossline dips d and e are both set to zero, Kmax and Kmin then simplify to what are called the most positive curvature K+ and the most negative curvature K−.

Another curvature included is contour curvature, Kc, which is equal to the strike curvature
multiplied by the sum of the squares of the dip. Finally, Koenderink and van Doorn (1992)
proposed the curvedness attribute, Kn, which is defined as the harmonic mean of Kmax and Kmin.

The one curvature attribute from Roberts (2001) that has not been included under Volume
Curvature is the shape index, Si, which is written as

$$S_i = \frac{2}{\pi}\tan^{-1}\!\left(\frac{K_{max} + K_{min}}{K_{max} - K_{min}}\right).$$

The shape index can be computed using trace maths once the maximum and minimum curvatures
have been computed.
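To make the quadratic-surface idea concrete, the sketch below (an illustration under the stated assumptions, not the HRS algorithm) least-squares fits the coefficients a through f to a 3x3 patch of a time-structure map and then evaluates several curvatures using expressions of the form given in Roberts (2001) and Appendix 7:

```python
import numpy as np

def fit_quadratic(patch, dx=1.0):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to a 3x3 patch of a
    time-structure map with grid spacing dx. Returns (a, b, c, d, e, f)."""
    y, x = np.meshgrid([-dx, 0.0, dx], [-dx, 0.0, dx], indexing="ij")
    A = np.column_stack([x.ravel()**2, y.ravel()**2, (x * y).ravel(),
                         x.ravel(), y.ravel(), np.ones(9)])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    return coeffs

def curvatures(patch, dx=1.0):
    """Evaluate a selection of curvature attributes for one 3x3 patch."""
    a, b, c, d, e, _ = fit_quadratic(patch, dx)
    km = (a * (1 + e**2) + b * (1 + d**2) - c * d * e) / (1 + d**2 + e**2) ** 1.5
    kg = (4 * a * b - c**2) / (1 + d**2 + e**2) ** 2
    root = np.sqrt(np.maximum(km**2 - kg, 0.0))
    kmax, kmin = km + root, km - root
    kpos = (a + b) + np.sqrt((a - b)**2 + c**2)   # most positive curvature (d = e = 0)
    kneg = (a + b) - np.sqrt((a - b)**2 + c**2)   # most negative curvature (d = e = 0)
    # shape index (undefined where Kmax = Kmin, i.e. a perfect sphere)
    si = (2.0 / np.pi) * np.arctan((kmax + kmin) / (kmax - kmin))
    return dict(Km=km, Kg=kg, Kmax=kmax, Kmin=kmin, Kpos=kpos, Kneg=kneg, Si=si)
```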

Klein et al. (2008) extended the method described in Roberts (2001) to 3D by using cross-
correlations between traces. We will refer to this method as volume curvature.

Barnes (1996) introduced two new spatial instantaneous attributes that extended the
instantaneous frequency concept and showed how these attributes could be used to find both the
dips and the azimuth of events in the subsurface. Al-Dossary and Marfurt (2006) then showed
how to use these concepts to create a curvature estimate from instantaneous attributes. We call
this approach instantaneous curvature.

We will now compute volume curvature on the F3 Block.


Volume Curvature
To start the volume curvature set of attributes, select Volume Curvature from the Volume
Attributes list under Processes:

On the resulting menu, select F3_Block for the Input, call the output VC (for volume curvature),
select Cross Correlate with neighbours, a 7 trace rolling window, a Correlation Window
Length of 50 ms, a Maximum shift of 25 ms, and select all 12 attributes:


After the processing has been completed, the input F3_Block cross-section is shown in View 1,
and all of the curvature attributes are able to be displayed in View 2, with the default display
being VC_xcor, the cross-correlation amplitudes:

Although the cross-correlation amplitudes are not used in the curvature calculations, this display
is useful for giving us confidence in the cross-correlation times (high correlation is shown as
red). To look at the cross-correlation times that are used in the curvature calculations, right-click
the plot area in View 2 and select Color Data Volume>VC_times to see this display:


Note that the time scale on the right goes from roughly +4 ms to -4 ms. These correlation times
were used to calculate all of the curvature attributes using the calculations explained in the
theory. Let’s next look at the curvature, starting with the mean curvature, Km, by right-clicking
on the plot area in View 2 and selecting Color Data Volume>VC_Km to see this display:

Next, we will display the Gaussian curvature, Kg, by right-clicking on the plot area in View 2 and
selecting Color Data Volume>VC_Kg to see this display:


Next, we will display the maximum curvature, Kmax, by right-clicking on the plot area in View 2
and selecting Color Data Volume>VC_Kmax to see this display:

Next, we will display the minimum curvature, Kmin , by right-clicking on the plot area in View 2
and selecting Color Data Volume>VC_Kmin to see this display:


Next, we will display the most positive curvature, K+ , by right-clicking on the plot area in View
2 and selecting Color Data Volume>VC_Kpositive to see this display:

Next, we will display the most negative curvature, K_, by right-clicking on the plot area in View
2 and selecting Color Data Volume>VC_Knegative to see this display:


Next, we will display the dip curvature, Kd , by right-clicking on the plot area in View 2 and
selecting Color Data Volume>VC_Kd to see this display:

Next, we will display the strike curvature, Ks , by right-clicking on the plot area in View 2 and
selecting Color Data Volume>VC_Ks to see this display:


Next, we will display the contour curvature, Kc, by right-clicking on the plot area in View 2 and
selecting Color Data Volume>VC_Kc to see this display:

Next, we will display the curvedness, Kn, by right-clicking on the plot area in View 2 and
selecting Color Data Volume>VC_Kn to see this display:


Next, we will display the dip attribute, Kdip, by right-clicking on the plot area in View 2 and selecting Color Data Volume>VC_Kdip_max to see this display:

Next, we will display the azimuth attribute, Kaz, by right-clicking on the plot area in View 2 and selecting Color Data Volume>VC_Kazimuth_dip_max to see this display:


To fully understand the effects of the curvature attributes on the F3 Block we need to look at a
spatial slice through the data. Thus, we will create amplitude maps over the Truncation horizon
for each of the curvature attributes, starting with the cross-correlation amplitudes, VC_xcor. To
do this, click the Processes tab and select Slice Processing>Create Data Slice.

At the top of the menu, select VC_xcor as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_xcor_Truncation.


Click OK and the resulting map will be shown below. Note that a Red to Blue color scheme has been used.

Next, we will create an amplitude map over the Truncation horizon for the cross-correlation
times, VC_times. To do this, click the Processes tab and select Slice Processing>Create
Data Slice. At the top of the menu, select VC_times as the Input, select Truncation for the
Target Event, Exactly on target for the Window Option and call the output
VC_times_Truncation.

The output looks like this:


Next, we will create an amplitude map over the Truncation horizon for the mean curvature,
VC_Km. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Km as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Km_Truncation. The output
looks as follows:

Next, we will create an amplitude map over the Truncation horizon for the Gaussian curvature,
VC_Kg. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Kg as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Kg_Truncation:


Next, we will create an amplitude map over the Truncation horizon for the maximum curvature,
VC_Kmax. To do this, click the Processes tab and select Slice Processing>Create Data
Slice. At the top of the menu, select VC_Kmax as the Input, select Truncation for the Target
Event, Exactly on target for the Window Option and call the output VC_Kmax_Truncation:

Next, we will create an amplitude map over the Truncation horizon for the minimum curvature,
VC_Kmin. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Kmin as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Kmin_Truncation:


Next, we will create an amplitude map over the Truncation horizon for the contour curvature,
VC_Kc. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Kc as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Kc_Truncation:

Next, we will create an amplitude map over the Truncation horizon for the dip curvature,
VC_Kd. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Kd as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Kd_Truncation:

Next, we will create an amplitude map over the Truncation horizon for the most negative
curvature, VC_Kneg. To do this, click the Processes tab and select Slice Processing>Create


Data Slice. At the top of the menu, select VC_Kneg as the Input, select Truncation for the
Target Event, Exactly on target for the Window Option and call the output
VC_Kneg_Truncation:

Next, we will create an amplitude map over the Truncation horizon for the most positive
curvature, VC_Kpos. To do this, click the Processes tab and select Slice Processing>Create
Data Slice. At the top of the menu, select VC_Kpos as the Input, select Truncation for the
Target Event, Exactly on target for the Window Option and call the output
VC_Kpos_Truncation:


Next, we will create an amplitude map over the Truncation horizon for the azimuthal curvature,
VC_Kaz. To do this, click the Processes tab and select Slice Processing>Create Data Slice.
At the top of the menu, select VC_Kaz as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output VC_Kaz_Truncation:

Next, we will create an amplitude map over the Truncation horizon for the maximum dip
curvature, VC_Kdip_max. To do this, click the Processes tab and select Slice
Processing>Create Data Slice. At the top of the menu, select VC_Kdip_max as the Input,
select Truncation for the Target Event, Exactly on target for the Window Option and call the
output VC_Kdip_max_Truncation:


Another useful way to visualize the generated attribute is to overlay it on the original seismic
data. This can be done with the help of Advanced 3D Visualization. To do this, select Advanced
3D Visualization from the Plugin option under Processes:

The blank Visualization window then appears, with the Seismic Volume Select option in the
upper left hand part of the window:

For the first volume, select as input the F3_Block. You will be prompted to convert the SEG-Y
volume to a LDM volume to reduce memory requirements. Click Yes:


After the first volume has been converted, select Add Seismic Volume from the Seismic
Volumes Selection menu to the left of the 3D volume as shown below:


Next, add as the second seismic volume the computed VC_Kpositive volume as shown below.
After the program converts this from SEG-Y to the LDM format, the display will look similar to
the image below. Note that the VC_Kpositive volume covers the original seismic since it is currently
opaque.

Before changing the transparency, we will change the orientation and display of the volume. Move
the mouse to the top of the visualization panel, right-click, and select the options.


Go to the General dialog and change the Z scaling in the General option of View Parameters to a
maximum value of 10, as shown below:

Next, go to Seismic (from F3_Block(LDM)) tab, and change the palette to Grey Scale
inverted.


Right-click and select the Options for the VC_Kpositive (LDM) seismic volume.
Make sure the colors for the seismic data and the generated attribute are different.

Enlarge the View Parameters dialog by clicking the button in the red box.


To make the attribute transparent select the Seismic (from VC_Kpositive (LDM)) tab, and drag
the mouse through the palette as shown below.

Select both the seismic and attribute in the Seismic Volumes Selection to visualize the seismic
and attribute simultaneously.


In this case, the volume curvature (VC_Kpositive) highlights the subtle discontinuity information
in the seismic (F3_block). The large fault can be easily seen in the seismic. The combined image
helps us interpret the attribute and seismic amplitude simultaneously.
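For readers who want to reproduce this kind of co-rendered display outside the program, below is a minimal matplotlib sketch; the function and variable names are illustrative and are not part of HampsonRussell.

import numpy as np
import matplotlib.pyplot as plt

def overlay(inline_seismic, inline_attr, clip=0.9):
    """Co-render an attribute on top of the seismic by making weak attribute
    values transparent, similar in spirit to dragging opacity through the
    palette. Both inputs are 2D (time x trace) arrays for the same inline."""
    plt.imshow(inline_seismic, cmap="gray_r", aspect="auto")
    # mask small attribute values so only the strong anomalies are drawn
    masked = np.ma.masked_where(
        np.abs(inline_attr) < clip * np.abs(inline_attr).max(), inline_attr)
    plt.imshow(masked, cmap="jet", alpha=0.6, aspect="auto")
    plt.show()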

Instantaneous Curvature
To start the instantaneous curvature set of attributes, select Instantaneous Curvature from the
Volume Attributes list under Processes:

On the menu, note that the creation of instantaneous curvature is a two-step procedure. In Step
1 we calculate instantaneous dip and azimuth, and in Step 2 the curvature attributes. We will use
the F3_Block as the input and call the output IC, for instantaneous curvature. Default the
window to 7 inline by 7 crossline traces, select Step 1: Calculate Instantaneous Dip and
Azimuth, select Instantaneous X-dip p, Instantaneous Y-dip q, Instantaneous Azimuth and
Instantaneous True Dip as the Outputs, as shown below, and click OK:


When the results have been calculated, the input F3_Block volume is shown in View 1, and the
four attributes in View 2, with the default display of IC_pDip, the inline dip. Right-click the
color bar, select Modify Range… and change the Lower Value to -0.2 and Upper Value to 0.2,
and click OK. The result will look like this:

Next, we will display the cross-line dip by right-clicking on the plot area in View 2 and selecting
Color Data Volume>IC_qDip to see this display (the previous color bar will still work) :


Next, display the azimuth volume by right-clicking on the plot area in View 2 and selecting
Color Data Volume>IC_iAzimuth. Right-click the color bar, select Modify Range and select
the Default Scan and OK buttons on the menu to get the true azimuth values from 0 to 360
degrees. Then, use Modify Color Scheme to change the color scheme to Phase. The result
should look like this:

Finally, we will display the total dip volume by right-clicking on the plot area in View 2 and
selecting Color Data Volume>IC_iDip. Right-click the color bar, select Modify Range and
change the Lower Value to 0.05 and Upper Value to 0.5, and click OK. Also, use Modify Color
Scheme to change the color scheme to Intensity. The result will look like this:


Before proceeding to Step 2 and computing the instantaneous curvature attributes, we will create
amplitude maps over the Truncation horizon through the dip and azimuth volumes. We will start
with the p dip, or the dip in the inline direction, IC_pDip. To do this, click the Processes tab
and select Slice Processing>Create Data Slice.

At the top of the menu, select IC_pDip as the Input, select Truncation for the Target Event,
Exactly on target for the Window Option and call the output IC_pDip_Truncation.

Then, click OK at the bottom of the menu:


The resulting map will look like this using the default color bar and normalized scale. Note that
the structures are well-defined and of a higher frequency nature than with volume curvature:

Next, we will create an amplitude map over the Truncation horizon for the q dip, or the dip in
the crossline direction, IC_qDip. To do this, click the Processes tab and select Slice
Processing>Create Data Slice. At the top of the menu, select IC_qDip as the Input, select
Truncation for the Target Event, Exactly on target for the Window Option and call the output
IC_qDip_Truncation. The completed map will look like this:


Next, create an amplitude map over the Truncation horizon for the total dip, IC_iDip. To do this,
click the Processes tab and select Slice Processing>Create Data Slice. At the top of the
menu, select IC_iDip as the Input, select Truncation for the Target Event, Exactly on target
for the Window Option and call the output IC_iDip_Truncation. The map will look like this:

Finally, we will create an amplitude map over the Truncation horizon for the azimuth,
IC_iAzimuth. To do this, click the Processes tab and select Slice Processing>Create Data
Slice. At the top of the menu, select IC_iAzimuth as the Input, select Truncation for the Target
Event, Exactly on target for the Window Option and call the output IC_iAzimuth_Truncation.
Right-click the color bar, select Modify Range and select the Default Scan and OK buttons on
the menu to get the true azimuth values from 0 to 360 degrees. Then, use Modify Color Scheme
to change the color scheme to Phase. The completed map will look like this:


Direct calculation of instantaneous attributes induces many spikes; these spikes are extremely
dominant and make the outputs hard to interpret. HRS 10.3 implements a weighting strategy to
enhance the interpretability of the instantaneous attributes.

Now, calculate these same instantaneous attributes using the weighting strategy and compare. Note
that the weights for the instantaneous kx and instantaneous ky cannot be larger than the corresponding
rolling window size. Use F3_Block as the input and call the output IC_Weight. Click the
checkbox "Use Weights to Reduce Spiking in Instantaneous Attributes", and use the default
weight of 5 for instantaneous kx (inline direction), instantaneous ky (crossline direction) and
instantaneous frequency (time direction). A sketch of one possible weighting approach is shown below.
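The exact weighting used by the program is not documented here; as one illustration of the general idea, the sketch below applies an envelope-weighted running average (in the spirit of Barnes, 2000) to a spiky instantaneous attribute. Function and parameter names are illustrative.

import numpy as np
from scipy.signal import hilbert

def envelope_weighted_smooth(attr, trace, length=5):
    """Smooth a spiky instantaneous attribute (computed from 'trace') with an
    envelope-weighted running average over 'length' samples; one common
    anti-spike strategy, not necessarily the program's own weighting."""
    env = np.abs(hilbert(trace))                 # weights = amplitude envelope
    kernel = np.ones(length)
    num = np.convolve(attr * env, kernel, mode="same")
    den = np.convolve(env, kernel, mode="same") + 1e-12
    return num / den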


Compare the unweighted and weighted pDip attributes by putting IC_pDip in View 2 and
IC_Weight_pDip in View 3. pDip is the apparent dip in the inline direction; the weighting strategy
effectively reduces the spikes and smooths the output. The same applies to qDip (the apparent
dip in the crossline direction).

Next, we compare the true dip attribute by putting IC_iDip in View 2 and IC_Weight_iDip in View
3. As with the apparent dip (pDip), the weighting strategy (View 3) effectively reduces the spikes
compared with View 2, while highlighting the main geologic features.


Put IC_iAzimuth in View 2 and IC_Weight_iAzimuth in View 3 for comparison. The
azimuth from the weighting strategy is more robust and easier to interpret. For example, the
azimuth of the main event in the black circle area stays constant in the weighted output
(View 3), whereas it appears quite noisy in the unweighted output (View 2).

Now, we will continue to Step 2 and compute the curvature attributes from the input p and q
dips. Go to the main Instantaneous Curvature menu and change the Input to IC_Weight_pDip
and keep the Output name as IC_Weight, for Instantaneous Curvature Weight. Then, select
Step 2: Calculate Instantaneous Curvature Attributes: p and q attributes must be
calculated first, as shown at the top of the menu.


Note that the weighting strategy is only used for computing the instantaneous wavenumbers and
instantaneous frequency; therefore, the weighting checkbox is grayed out when the user
selects Step 2.
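For reference, Step 2 derives the curvature attributes from spatial derivatives of the p and q dip volumes. Below is a minimal NumPy sketch of the standard surface-curvature formulas (see Roberts, 2001); it illustrates the mathematics only and is not the program's exact implementation.

import numpy as np

def curvature_from_dips(p, q, dx=1.0, dy=1.0):
    """Mean, Gaussian, maximum and minimum curvature from inline (p) and
    crossline (q) dip grids indexed [inline, crossline]."""
    r = np.gradient(p, dx, axis=0)                       # d2z/dx2
    t = np.gradient(q, dy, axis=1)                       # d2z/dy2
    s = 0.5 * (np.gradient(p, dy, axis=1) + np.gradient(q, dx, axis=0))
    g = 1.0 + p**2 + q**2
    k_gauss = (r * t - s**2) / g**2
    k_mean = ((1 + q**2) * r - 2 * p * q * s + (1 + p**2) * t) / (2 * g**1.5)
    disc = np.sqrt(np.maximum(k_mean**2 - k_gauss, 0.0))
    return k_mean, k_gauss, k_mean + disc, k_mean - disc  # Kmean, Kgauss, Kmax, Kmin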

Next, change the Instantaneous Y-dip q attribute volume: to IC_Weight_qDip and select the
11 curvature attributes (Kgauss, Kmin, Knegative, Kstrike, Kshape, Kmean, Kmax, Kpositive, Kdip, Kcurvedness,
and Kazimuth) as outputs. Finally, click OK, as shown below:


When the results have been calculated, the IC_Weight_pDip input volume is shown in View 1,
and the 11 attributes in View 2, with the default display of IC_Weight_kMean. To compare the
attributes with the original F3_Block input, select Project Data>Seismic and drag-and-drop the
F3_Block volume into View 1 and select Inline 300. You will also note that the
Kmean attribute highlights the fault features even more clearly. The result will look like this:

Next, display the Kgauss volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kGauss. The result will look like this:


Next, display the Kmax volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kMax. The result will look like this:

Next, display the Kmin volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kMin. The result will look like this:


Next, display the Kpositive volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kPos. The result will look like this:

Next, display the Knegative volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kNeg. The result will look like this:


Next, display the Kstrike volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kStrike. The result will look like this:

Next, display the Kdip volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kDip. The result will look like this:


Next, display the Kcurvedness volume by right-clicking on the plot area in View 2 and selecting
Color Data Volume>IC_Weight_kCurvedness. The result will look like this:

Next, display the Kshape volume by right-clicking on the plot area in View 2 and selecting Color
Data Volume>IC_Weight_kShape. The result will look like this:


Finally, display the Kazimuth volume by right-clicking on the plot area in View 2 and selecting
Color Data Volume>IC_Weight_kAzimuth. Right-click the color bar, select Modify Range and
select the Default Scan and OK buttons on the menu to get the true azimuth values from -90 to
90 degrees.

Then, use Modify Color Scheme to change the color scheme to Phase. The result should look
like this:

We will next create amplitude maps over the Truncation horizon through the instantaneous
curvature attribute volumes.

We will start with the Kmean attribute, IC_Weight_kMean. To create the slice, click the
Processes tab and select Slice Processing>Create Data Slice. At the top of the menu, select
IC_Weight_kMean as the Input, select Truncation for the Target Event, Exactly on target for the
Window Option and call the output IC_Weight_kMean_Truncation.


The menu will look as below.

Click OK to start the process.

When the map is displayed, right-click the color bar, select Modify Range, change the
Lower Value to -1 and the Upper Value to 2, select the Standardized color key, and click OK.
Also change the color scheme to Grey Scale (Inverted). The result will look like this:


Next, create the Kpositive attribute slice, IC_Weight_kPos. To create the slice, click the
Processes tab and select Slice Processing>Create Data Slice. At the top of the menu, select
IC_Weight_kPos as the Input, select Truncation for the Target Event, Exactly on target for
the Window Option and call the output IC_Weight_kPos_Truncation.

When the map is displayed, right-click the color bar, select Modify Range, change the
Lower Value to -1 and the Upper Value to 2, select the Standardized color key, and click OK.
Also change the color scheme to Grey Scale (Inverted). The result will look like this:

We recommend that you continue to create slices from the other instantaneous curvature attributes.
Note that you can still use the unweighted p and q dips as inputs for Step 2: Calculate
Instantaneous Curvature Attributes: p and q attributes must be calculated first. Although
the unweighted instantaneous curvature attributes tend to be spiky, they can show higher
resolution when the color range is set carefully.


Phase Congruency

Phase congruency is yet another attribute used to identify fractures and faults on seismic data.
The phase congruency algorithm was initially developed to detect corners and edges on two-
dimensional (2D) digital images (Kovesi, 1996), and is fully described in Appendix 8. To start
the phase congruency attribute, select Phase Congruency from the Volume Attributes list
under Processes:

Then, fill out the menu as shown below, inputting the F3 Block and using the default parameters:


Click OK and the phase congruency algorithm will start running, giving the following message
to show you its progress:

When finished, the result will look like this on a vertical section:


However, the best way to see the effects of phase congruency is on a structure or time slice, so
we will next create an amplitude map over the Truncation horizon through this attribute.

To do this, click the Processes tab and select Slice Processing>Create Data Slice. At the top
of the menu, select Phase_Cong as the Input, select Truncation for the Target Event, Exactly
on target for the Window Option and call the output Phase_Cong_Truncation, as shown here:


The result is shown on the next page.

Contrast this result with those we have seen from the other attributes described in this guide, like
energy ratio, curvature, etc.

This brings us to the end of our practical application of volume attributes to the F3 Block data.
What follows is a set of Appendices on the theory of each attribute.


References

Al-Dossary, S., and K. J. Marfurt, 2006, 3D volumetric multispectral estimates of reflector
curvature and rotation: Geophysics, 71, 41–51.

Al-Dossary, S., and K. J. Marfurt, 2003, Fracture preserving smoothing, 73rd Annual
International Meeting, SEG.

Araman, A., and Paternoster. B, 2014, Seismic quality monitoring during processing: First break,
32, 69-78.

Araman, A., Paternoster. B, Isakov, D. and Shchukina, N., 2012, Seismic quality monitoring
during processing: what should we measure? 82th SEG Annual Meeting.

Bahorich, M. S., and S. L. Farmer, 1995, 3-D seismic discontinuity for faults and stratigraphic
features, The coherence cube: The Leading Edge, 16, 1053–1058.

Barnes, A. E., 1996, Theory of two-dimensional complex seismic trace analysis: Geophysics, 61,
264–272.

Barnes, A.E., 2007, A tutorial on complex seismic trace analysis: Geophysics, 72, no. 6, W33-
W43.

Barnes, A.E., 2000, Weighted average seismic attributes: Geophysics, 65, no. 1, 275-285.

Battista, B.M., C. Knapp, T. McGee, and V. Goebel, 2007, Application of the empirical mode
decomposition and Hilbert-Huang transform to seismic reflection data: Geophysics, 72, no. 2,
H29-H37.


Bekara, M., and M. Van der Baan, 2009, Random and coherent noise attenuation by empirical
mode decomposition: Geophysics, 74, no. 5, V89-V98.

Boashash, B., and M. Mesbah, 2004, Signal Enhancement by Time-Frequency Peak Filtering:
IEEE transaction on signal processing, 52, no. 4, 929-937.

Castagna, J., S. Sun, and R. Siegfried, 2003, Instantaneous spectral analysis: Detection of low-
frequency shadows associated with hydrocarbons: The Leading Edge, 22, no. 120, 120-127.

Chakraborty, A., and D. Okaya, 1995, Frequency-time decomposition of seismic data using
wavelet-based methods: Geophysics, 60, no. 6, 1906-1916.

Chopra, S., and K.J. Marfurt, 2007a, Seismic Attributes for Prospect Identification and Reservoir
Characterization: SEG geophysical development series; no. 11, Society of Exploration
Geophysicists, Tulsa, Oklahoma.

Chopra, S. and K.J. Marfurt, 2007b, Curvature attribute applications to 3D seismic data, The
Leading Edge, v. 26, p. 404-414.

Chopra, S. and K. J. Marfurt, 2008a, Emerging and future trends in seismic attributes: The
Leading Edge, 27, 298–318, doi:10.1190/1.2896620.

Chopra, S. and K. J Marfurt, 2008b, Gleaning meaningful information from seismic attributes:
First Break, 43–53.

Chopra, S. and K.J. Marfurt, 2010, Integration of coherence and curvature images, The Leading
Edge, v. 29, p. 1092-1107.

Chopra, S. and K.J. Marfurt, 2012a, Which curvature is right for you? Part -1: Presented at the
2012 CSPG/CSEG/CWLS GeoConvention (extended abstract available on CSEG website).


Chopra, S. and K.J. Marfurt, 2012b, Which curvature is right for you? Part -2: Presented at the
2012 CSPG/CSEG/CWLS GeoConvention (extended abstract available on CSEG website).

Cohen, L., 1995, Time-Frequency Analysis: Prentice-Hall PTR.

Cook, J., Chandran, V. and Fookes, C., 2006, 3D face recognition using log-Gabor templates:
presented at the British Machine Vision Conference (BMVC), September 4-7, 2006, Edinburgh.

Cooper, G.R.J, and Cowan, D.R., 2008, Edge enhancement of potential-field data using
normalized statistics: Geophysics, 73, no. 2, H1–H4.

Daubechies, I., 1992, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in
Applied Mathematics, 61, Soc. for Indust. And Appl. Math. (SIAM), Philadelphia, Pa.

Deering, R., and J.F. Kaiser, 2005, The use of a masking signal to improve empirical mode
decomposition, in Acoustics, Speech, and Signal Processing, 2005, Proceedings (ICASSP'05),
IEEE International Conference, IEEE, p. IV485-IV488.

Fehmers G., and C. Höcker, 2002, Fast structural Interpretation with structure-oriented filtering:
Geophysics, Vol. 68, pages 1286-1293.

Field, D., 1987, Relations between the statistics of natural images and the response properties of
cortical cells: Journal of the Optical Society of America, vol. 4, no. 12, pp 2379-2394.

Flandrin, P., G. Rilling, and P. Goncalves, 2004, Empirical Mode Decomposition as a Filter
Bank: IEEE Signal Processing Letters, 11, no. 2, 112-114.

Fomel, S., 2007, Local seismic attributes: Geophysics, 72, no. 3, A29-A33.

Gabor, D., 1946, Theory of communication, part I: J. Int. Elect. Eng., v. 93, part III, p. 429-441.


Gersztenkorn, A., and Marfurt, K. J., 1999, Eigenstructure based coherence computations as an
aid to 3D structural and stratigraphic mapping: Geophysics, 64, 1468–1479.

Gonzalez, R.C., and Woods, R.E., 2008, Digital Image Processing: Pearson Prentice Hall, Upper
Saddle River, New Jersey.

Hall, M, 2006, Resolution and uncertainty in spectral decomposition: First Break, 24, 43-47.

Han, J., and M. Van der Baan, 2011, Empirical Mode Decomposition and Robust Seismic
Attribute Analysis: 2011 CSPG CSEG CWLS Convention, 114.

Han, J., and M. Van der Baan, 2013, Empirical mode decomposition for seismic time-frequency
analysis: Geophysics, 78, O9–O19.

Herrera R. H., Han J. and M. van der Baan, 2014 Applications of the synchrosqueezing
transform in seismic time-frequency analysis. Geophysics, 79, V55-V64.

Höcker C., and G. Fehmers, 2002, Fast structural interpretation with structure-oriented filtering:
TLE, Vol. 21, No.3, pages 238-243.

Huang, J., and B. Milkereit, 2009, Empirical Mode Decomposition Based Instantaneous Spectral
Analysis and its Applications to Heterogeneous Petrophysical Model Construction, in
Frontiers+Innovation-2009 CSPG CSEG CWLS Convention,, p. 205–210.

Huang, N.E., 1999, A new view of nonlinear water waves: The Hilbert spectrum: Annual Review
of Fluid Mechanics, 103, no. 1, 417-457.

Huang, N.E., M.-L.C. Wu, S.R. Long, S.S.P. Shen, W. Qu, P. Gloersen, and K.L. Fan, 2003, A
confidence limit for the empirical mode decomposition and Hilbert spectral analysis: Proc. R.
Soc. London, Ser. A, no. 459, 2317-2345.


Huang, N.E., and Z. Wu, 2008, A review on Hilbert-Huang transform: Method and its
applications to geophysical studies: Reviews of Geophysics, 46, no. 2, RG2006.

Huang, N.E., Z. Wu, S.R. Long, K.C. Arnold, X. Chen, and B. Karin, 2009, On instantaneous
frequency: Advances in Adaptive Data Analysis, 1, no. 2, 177-229.

Huang, N.E., Z. Shen, S.R. Long, M.C. Wu, H.H. Shih, Q. Zheng, N.-C. Yen, C.C. Tung, and
H.H. Liu, 1998, The empirical mode decomposition and the Hilbert spectrum for nonlinear and
non-stationary time series analysis: Proceedings of the Royal Society A: Mathematical, Physical
and Engineering Sciences, 454, no. 1971, 903-995.

Jones I.F. and Levy, S., 1987, Signal-to-noise ratio enhancement in multichannel seismic data
via the Karhunen-Loéve transform: Geophysical Prospecting, 35, no. 1, 12-12.

Klein, P., L. Richard and H. James, 2008, 3D curvature attributes: a new approach for seismic
interpretation: First Break, 26, 105–111.

Koenderink, J.J. and van Doorn, A. J., 1992, Surface shape and curvature scales: Image and
Vision Computing, 10(8), 557-565.

Kovesi, P.D., 1996, Invariant Measures of Feature Detection: Ph.D. thesis, The University of
Western Australia.

Kovesi, P.D., 2003, Phase congruency detects corners and edges: Proceedings of the Seventh
Australasian Conference on Digital Image Computing Techniques and Applications
(DICTA’03).

Kramer, H.P. and Mathews, M.V., 1956, A linear coding for transmitting a set of correlated
signals, IRE Transactions on Information Theory IT-2, 4146.

Kumar, P., and E. Foufoula-Georgiou, 1997, Wavelet analysis for geophysical applications:
Reviews of Geophysics. , no. 35, 385-412.


Li, Y., and X. Zheng, 2008, Spectral decomposition using Wigner-Ville distribution with
applications to carbonate reservoir characterization: The Leading Edge, 27, 1050-1057.

Liu, G., S. Fomel, and X. Chen, 2011, Time-frequency analysis of seismic data using local
attributes: Geophysics, 76, no. 6, P23-P34.

Liu, J., and K.J. Marfurt, 2006, Thin bed thickness prediction using peak instantaneous
frequency: SEG Expanded Abstracts, no. 4, 968-972.

Liu, J., and K.J. Marfurt, 2007, Instantaneous spectral attributes to detect channels: Geophysics,
72, no. 2, P23-P31.

Luo, Y., W. G. Higgs, and W. S. Kowalik, 1996, Edge detection and stratigraphic analysis using
3-D seismic data: 66th Annual International Meeting, SEG, Expanded Abstracts, 324–327.

Mallat, S., 2008, A Wavelet Tour of Signal Processing, The Sparse Way, 3rd ed., Acad. Press,
Burlington, Mass.

Magrin-Chagnolleau, I., and R.G. Baraniuk, 1999, Empirical mode decomposition based time-
frequency attributes, in Proc. 69th SEG Meeting, Houston, p.1-4.

Mallat, S., 2008, A wavelet tour of signal processing: The sparse way. Academic Press, 98-99.

Marfurt, K.J., and R.L. Kirlin, 2001, Narrow-band spectral analysis and thin-bed tuning:
Geophysics, 66, no. 4, 1274-1283.

Marfurt, K. J., 2006, Robust estimates of 3D reflector dip: Geophysics, 71, 29-40.

Marfurt, K. J., and R. L. Kirlin, 2000, 3D broadband estimates of reflector dip and amplitude:
Geophysics, 65, 304–320.


Marfurt, K. J., R. L. Kirlin, S. H. Farmer, and M. S. Bahorich, 1998, 3D seismic attributes using
a running window semblance-based algorithm: Geophysics, 63, 1150–1165.

Marfurt, K. J., V. Sudhakar, A. Gersztenkorn, K. D. Crawford, and S. E. Nissen, 1999,
Coherency calculations in the presence of structural dip: Geophysics, 64, 104–111.

Miller, H. G., and V. Singh, 1994, Potential field tilt — A new concept for location of potential
field sources: Journal of Applied Geophysics, 32, 213–217.

Mitasova, H. and Hofierka, J., 1993, Interpolation by regularised spline with tension : II.
Application to terrain modeling and surface geometry analysis: Mathematical Geology, 25, 657-
669.

Morlet, J., G. Arensz, and E. Fourgeau, 1982, Wave propagation and sampling theory-Part II:
Sampling theory and complex waves: Geophysics, 41, no. 2, 222-236.

Odebeatu, E., J. Zhang, M. Chapman, E. Liu, and X.Y. Li, 2006, Application of spectral
decomposition to detection of dispersion anomalies associated with gas saturation: The Leading
Edge, 25, 206-210.

Partyka, G., Gridley, J., and Lopez, J., 1999, Interpretational applications of spectral
decomposition in reservoir characterization: The Leading Edge, 20, 353–360.

Reine, C., M. Van der Baan, and R. Clark, 2009, The robustness of seismic attenuation
measurements using fixed- and variable-window time-frequency transforms: Geophysics, 74, no.
2, WA123-WA135.

Reine C., Clark R. and Van der Baan M. 2012a, Robust prestack Q-determination using surface
seismic data: I - Method and synthetic examples. Geophysics, 77 , no.1, R45-R56.


Reine C., Clark R. and Van der Baan M. 2012b, Robust prestack Q-determination using surface
seismic data: II - 3D Case study. Geophysics, 77, no.1, B1-B10.

Portniaguine O. and J. Castagna, 2004. Inverse spectral decomposition. 74th SEG annual
meeting. Expanded Abstracts.

Rich, J., 2013, Expanding the applicability of curvature attributes through clarification of
ambiguities in derivation and terminology, Expanded Abstract, SEG 2013, Houston.

Rioul, O., and M. Vetterli, 1991, Wavelets and signal processing: Signal Processing Magazine,
IEEE , 8, 14-38.

Roberts, A., 2001, Curvature attributes and their application to 3D interpreted horizons: First
Break, 19, 85–100.

Saha, J.G., 1987, Relationship between Fourier and instantaneous frequency, in 57th Annual
International Meeting, SEG, Expanded Abstracts, p. 591-594.

Stockwell, R.G., L. Mansinha, and R.P. Lowe, 1996, Localization of the complex spectrum: the
S transform: IEEE Transactions on Signal Processing, 44, no. 4, 998-1001.

Taner, M. T., F. Koehler, and R. E. Sheriff, 1979, Complex seismic trace analysis: Geophysics,
44, 1041–1063.

Tary, J.B., R. Herrera, J. Han and M. Van der baan. 2014, Spectral estimation – What is new?
What is next?: Reviews of Geophysics, 52, 723-749.

Torres, M.E., M.A. Colominas, G. Schlotthauer, and Patrick Flandrin, 2011, A complete
ensemble empirical mode decomposition with adaptive noise, in 2011 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, p. 4144–4147.


Van der Baan, M., S. Fomel, and M. Perz, 2010, Nonstationary phase estimation: A tool for
seismic interpretation? The Leading Edge, 29, no. 9, 1020-1026.

Verduzco, B., J. D. Fairhead, C. M. Green, and C. MacKenzie, 2004, The meter reader—New
insights into magnetic derivatives for structural mapping: The Leading Edge, 23, 116–119.

Wijns, C., C. Perez, and P. Kowalczyk, 2005, Theta map: Edge detection in magnetic data:
Geophysics, 70, no. 4, L39–L43.

Wu, Z., and N.E. Huang, 2009, Ensemble Empirical Mode Decomposition: a Noise-Assisted
Data Analysis Method: Advances in Adaptive Data Analysis, 01, no. 01, 1-49.


Appendix 1 Theory of instantaneous attributes


As explained in the guide, instantaneous attributes are computed from the complex trace, C(t),
which is composed of the seismic trace, s(t) and its Hilbert transform, h(t), which is like a 90°
phase shifted trace. Writing the complex trace in polar form, as shown below, gives us the two
basic attributes: the amplitude envelope, A(t) and instantaneous phase, φ(t). (Note that the term
instantaneous amplitude is used synonymously with amplitude envelope.)

C(t) = s(t) + i h(t) = A(t)\exp(i\phi(t)) = A(t)\cos\phi(t) + i A(t)\sin\phi(t),        (A1.1)

where i = \sqrt{-1}, A(t) = \sqrt{s(t)^2 + h(t)^2} and \phi(t) = \tan^{-1}(h(t)/s(t)).

Note that if we think of the seismic sample value and its Hilbert transform as a point in attribute
space, the instantaneous amplitude is the length to that point and the instantaneous phase is the
rotation angle. That is, we have transformed from rectangular to polar coordinates. The third
basic instantaneous attribute is the instantaneous frequency, which is the time derivative of the
instantaneous phase, given by the equation:

\omega(t) = \frac{d\phi(t)}{dt}.        (A1.2)
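As a minimal Python sketch of equations A1.1 and A1.2 (assuming SciPy is available; the function name is illustrative):

import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(trace, dt):
    """Amplitude envelope, instantaneous phase (radians) and instantaneous
    frequency (Hz) from the analytic trace, following equations A1.1-A1.2."""
    analytic = hilbert(trace)                    # s(t) + i h(t)
    envelope = np.abs(analytic)                  # A(t)
    phase = np.unwrap(np.angle(analytic))        # phi(t), unwrapped
    inst_freq = np.gradient(phase, dt) / (2.0 * np.pi)
    return envelope, phase, inst_freq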

Derivative and Integration Attributes

To understand the effect of differentiating and integrating the seismic trace let us first consider a
one-dimensional Gaussian shape which is given by the formula:

g(t) = A\exp\!\left(-\frac{t^2}{\sigma^2}\right),        (A1.3)


where σ gives the width of the Gaussian and A gives its amplitude. The derivative of this
function is

\frac{dg(t)}{dt} = -2A\frac{t}{\sigma^2}\exp\!\left(-\frac{t^2}{\sigma^2}\right).        (A1.4)

These two functions are plotted below and it is clear from this plot that the derivative of the
Gaussian function has an extra zero crossing and thus can be thought of as creating a
discontinuity in the original function. That is, it turns peaks into edges. Another way to think of
this is as a “sharpening” function on a seismic pulse. Since the Gaussian is the integral of its
derivative, integration can be thought of as a smoothing attribute.

Another way to look at this is to consider the effect of differentiation and integration on a single
frequency component given by f(t) = \exp(i\omega t) = \cos(\omega t) + i\sin(\omega t), where i = \sqrt{-1}, \omega = 2\pi f
and f = frequency. The derivative of this function is given by

\frac{df(t)}{dt} = i\omega\exp(i\omega t),        (A1.5)


and its integral by

\int f(t)\,dt = \frac{-i}{\omega}\exp(i\omega t) + c.        (A1.6)

Since i can be thought of as a ninety-degree phase shift, differentiation increases the frequency
content of the signal and rotates it by 90 degrees, while integration decreases the frequency content
of the signal and rotates it by -90 degrees.
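As a quick numerical check of this statement (assuming NumPy), differentiating a trace in the time domain is equivalent to multiplying each of its Fourier components by i\omega:

import numpy as np

dt = 0.002
t = np.arange(0, 1, dt)
s = np.cos(2 * np.pi * 20 * t)                   # single 20 Hz component

S = np.fft.rfft(s)
omega = 2 * np.pi * np.fft.rfftfreq(len(s), dt)
ds_spectral = np.fft.irfft(1j * omega * S, n=len(s))   # derivative via i*omega
ds_numeric = np.gradient(s, dt)                         # finite-difference derivative

print("max difference:", np.max(np.abs(ds_spectral[5:-5] - ds_numeric[5:-5])))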


Appendix 2 Theory of frequency attributes


(Note: what follows is an edited version of the paper: “Empirical mode decomposition for
seismic time-frequency analysis”, by Jiajun Han and Mirko van der Baan, which appeared in
Geophysics, 78, March-April, 2013).

Introduction

The most common tool for spectral analysis is the Fourier transform, S( ω), of a seismic signal s(t),
which is written in continuous form as

S(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} s(t)\,e^{-i\omega t}\,dt,        (A2-1)

where \omega = 2\pi f, f = frequency and i = \sqrt{-1}. The inverse Fourier transform is written



s(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,e^{i\omega t}\,d\omega.        (A2-2)

In equations A2-1 and A2-2 note that the seismic signal s(t) is real but that the Fourier
transform S( ω) is complex and can be written in rectangular form as
S(\omega) = S_{real}(\omega) + i\,S_{imag}(\omega),        (A2-3)

where Sreal(ω) is the real component of S(ω) and Simag( ω) is the imaginary component of S(ω).
The rectangular form shown in equation A2-3 can be re-written in polar form as
S(\omega) = |S(\omega)|\,e^{i\varphi(\omega)},        (A2-4)

where |S(\omega)| is the amplitude spectrum and \varphi(\omega) is the phase spectrum of s(t). In their discrete
form, the Fourier transform pair given in equations A2-1 and A2-2 is fundamental to all aspects
of seismic signal processing, from frequency filtering through to seismic migration.

However, if applied to the entire seismic trace, the Fourier transform provides no information
about local frequency variations. Knowledge of how the frequency content of a signal varies in
time would be very useful. Local time-frequency analysis is commonly used in seismic
processing and interpretation, and there is a rich history and diversity in developed
methodologies. As discussed in Appendix 1, Taner et al. (1979) proposed the instantaneous
frequency attribute, which is useful in correlation and appears to indicate hydrocarbon
accumulations. The wavelet transform developed by Morlet et al. (1982) shows more flexibility
and superiority in geophysical applications. Partyka et al. (1999) demonstrated the value of
spectral decomposition in 3D seismic data interpretation using tapered short-time Fourier
transform. Barnes (2000) improves the interpretability of instantaneous attributes by using a
weighted average window. Castagna et al. (2003) demonstrate the suitability of the instantaneous


spectrum for hydrocarbon detection. Liu and Marfurt (2007) also use the instantaneous
spectrum for detecting geologic structures.

Most recently, local attributes derived from an inversion-based time-frequency analysis have also
been used in seismic interpretation (Liu et al., 2011). Time-frequency decomposition maps a 1D
time signal into a 2D frequency and time image, which describes how the frequency content
varies with time. The widely used short-time Fourier transform calculates the fast discrete
Fourier transform in each time window to compute the spectrogram. The window length
determines the tradeoff between time and frequency resolution as the decomposition basis of sine
and cosine waves can only provide a fixed spectral resolution (Mallat, 2008).

To overcome the limitations of the short-time Fourier transform, wavelet-based methods have
been applied for seismic time-frequency analysis. Chakraborty and Okaya (1995) compare the
wavelet transform with Fourier-based methods for performing time-frequency analysis on
seismic data, and show the superiority of the wavelet transform in terms of spectral resolution.
Likewise, the S-transform proposed by Stockwell et al. (1996) can be interpreted as a hybrid of
the wavelet transform and short-time Fourier transform. The short-time Fourier, wavelet, and S-
transforms have all been successfully applied to seismic time-frequency analysis; and yet, they
are all inherently limited in terms of time-frequency resolution by their intrinsic choice of
decomposition basis. The computation of instantaneous frequencies seems to offer the highest
possible time-frequency resolution as an individual frequency is obtained at each time sample.
Unfortunately, negative frequencies, which hold uncertain physical interpretation, are not
uncommon (Barnes, 2007).

In this Appendix, we explore using empirical mode decomposition (EMD) in combination with
instantaneous frequencies, which is guaranteed to produce positive values only. The empirical
mode decomposition method (Huang et al., 1998) is a powerful signal analysis technique for
non-stationary and nonlinear systems. EMD decomposes a seismic signal into a sum of intrinsic
oscillatory components, called “intrinsic mode functions” (IMFs). Each IMF has different
frequency components, potentially highlighting different geologic and stratigraphic information.
Furthermore, high-resolution time-frequency analysis is possible by combining EMD with the
instantaneous frequency. The resulting time-frequency resolution promises to be significantly
higher than that obtained using traditional time-frequency analysis tools, such as short-time
Fourier and wavelet transforms.

Empirical mode decomposition methods have progressed from EMD to ensemble empirical
mode decomposition (EEMD) (Wu and Huang, 2009), and a complete ensemble empirical mode
decomposition (CEEMD) has recently been proposed by Torres et al. (2011). Even though EMD
methods offer many promising features for analyzing and processing geophysical data, there
have been few applications in geophysics. Magrin-Chagnolleau and Baraniuk (1999) and Han
and Van der Baan (2011) use EMD to obtain robust seismic attributes. Battista et al. (2007)
exploit EMD to remove cable strum noise in seismic data. Bekara and Van der Baan (2009)
eliminate the first EMD component in the f-x domain to attenuate random and coherent seismic


noise. Huang and Milkereit (2009) use the EEMD to analyze the time-frequency distribution of
well logs.

Before discussing EMD, we will discuss both the short time Fourier transform (STFT) and
wavelet transform (WT) methods.

The Short Time Fourier Transform

As the name implies, the Short Time Fourier Transform, or STFT, decomposes the seismic data
into a series of overlapping “short time” windows and performs the Fourier transform on each of
these windows. The equation for the STFT can be written as


STFT(\tau,\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} s(t)\,w(t-\tau)\,e^{-i\omega t}\,dt,        (A2.5)

where s(t) is the seismic trace, w(t − τ) is a sliding window function, and e^{-i\omega t} is the Fourier
transform operator.

The results can be analyzed in various ways, to find the time variant dominant and average
frequency of the data, and to transform the data to various frequency bands. Thus, this option is
similar to the average and dominant frequency attributes in the single trace attributes and also the
filter slices, except that it has more options and also has an option to compute the attributes over
a structural window. This last option is similar to the spectral decomposition method (Partyka et
al., 1999).
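A minimal sketch of the STFT and a dominant-frequency extraction, assuming SciPy (window length and overlap choices are illustrative):

import numpy as np
from scipy.signal import stft

def stft_dominant_frequency(trace, dt, window_ms=100.0):
    """Spectrogram of a trace via the short-time Fourier transform and the
    dominant (peak) frequency in each time window."""
    fs = 1.0 / dt
    nperseg = int(round(window_ms * 1e-3 * fs))
    freqs, times, Z = stft(trace, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    amp = np.abs(Z)
    dominant = freqs[np.argmax(amp, axis=0)]     # peak frequency per window
    return freqs, times, amp, dominant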


Figure A1.1 The concept behind the short window Fourier transform. (Partyka et al., 1999).

The Wavelet Transform

(Note: the following excellent discussion was extracted from the documentation for Gwyddion,
which is a free SPM (scanning probe microscopy) data visualization and analysis package.)

The wavelet transform is similar to the Fourier transform (or, more closely, to the windowed
Fourier transform) but with a completely different merit function. The main difference is this: the
Fourier transform decomposes the signal into sines and cosines, i.e. functions localized in
Fourier space; in contrast, the wavelet transform uses functions that are localized in both real
and Fourier space. Generally, the wavelet transform can be expressed by the following equation:

F(a,b) = \int_{-\infty}^{\infty} f(x)\,\psi^{*}_{(a,b)}(x)\,dx,        (A2.6)

where * is the complex conjugate symbol and \psi_{(a,b)} is the analyzing wavelet function. This function can be
chosen arbitrarily provided that it obeys certain rules.

As can be seen, the wavelet transform is in fact an infinite set of various transforms, depending on
the merit function used for its computation. This is the main reason why we hear the term
"wavelet transform" in very different situations and applications. There are also many ways to
sort the types of wavelet transforms. Here we show only the division based on wavelet
orthogonality. We can use orthogonal wavelets for discrete wavelet transform development and


non-orthogonal wavelets for continuous wavelet transform development. These two transforms
have the following properties:

1. The discrete wavelet transform returns a data vector of the same length as the input.
Usually, even in this vector, many values are almost zero. This corresponds to the fact that it
decomposes the signal into a set of wavelets (functions) that are orthogonal to their translations and
scaling. Therefore we decompose such a signal into the same number of, or fewer, wavelet
coefficients than the number of signal data points. Such a wavelet spectrum is
very good for signal processing and compression, for example, because we get no redundant
information.

2. The continuous wavelet transform, in contrast, returns an array one dimension larger than
the input data. For 1D data we obtain an image of the time-frequency plane. We can
easily see how the signal frequencies evolve over the duration of the signal and compare
the spectrum with the spectra of other signals. Because a non-orthogonal set of wavelets is used,
the data are highly correlated, so a large amount of redundancy is present. This helps to present
the results in a more readable form.

The discrete wavelet transform (DWT) is an implementation of the wavelet transform using a
discrete set of wavelet scales and translations obeying some defined rules. In other words,
this transform decomposes the signal into a mutually orthogonal set of wavelets, which is the main
difference from the continuous wavelet transform (CWT), or its implementation for discrete
time series, sometimes called the discrete-time continuous wavelet transform (DT-CWT).

The wavelet can be constructed from a scaling function which describes its scaling properties.
The restriction that the scaling function must be orthogonal to its discrete translations implies
some mathematical conditions on it, for example the dilation equation:

\phi(x) = \sum_{k=-\infty}^{\infty} a_k\,\phi(Sx - k),        (A2.7)

where S is a scaling factor (usually chosen as 2). Moreover, the area under the function must
be normalized and the scaling function must be orthogonal to its integer translations, i.e.

\int_{-\infty}^{\infty} \phi(x)\,\phi(x+l)\,dx = \delta_{0,l}.        (A2.8)

After introducing some more conditions (since the restrictions above do not produce a unique
solution) we can solve all of these equations, i.e. obtain the finite set of coefficients a_k that
define the scaling function and also the wavelet. The wavelet is then obtained from the scaling
function, where N is an even integer. The set of wavelets then forms an orthonormal basis
which we use to decompose the signal. Note that usually only a few of the coefficients a_k are
nonzero, which simplifies the calculations.


Empirical mode decomposition

EMD decomposes a data series into a finite set of signals, called IMFs. The IMFs represent the
different oscillations embedded in the data. They satisfy two conditions: (1) in the whole data
set, the number of extrema and the number of zero crossings must be equal or differ by at most
one; and (2) at any point, the mean value of the envelope defined by the local maxima and
the envelope defined by the local minima is zero. These conditions are necessary to ensure
that each IMF has a localized frequency content by preventing frequency spreading due to
asymmetric waveforms (Huang et al., 1998).

EMD is a fully data-driven separation of a signal into fast and slow oscillation components. The
IMFs are computed recursively, starting with the most oscillatory one. The decomposition
method uses the envelopes defined by the local maxima and the local minima of the data
series. Once the maxima of the original signal are identified, cubic splines are used to interpolate
all the local maxima and construct the upper envelope. The same procedure is used for the local
minima to obtain the lower envelope.

Next, one calculates the average of the upper and lower envelopes and subtracts it from the
initial signal. This interpolation process is continued on the remainder. This sifting process
terminates when the mean envelope is reasonably zero everywhere, and the resultant signal is
designated as the first IMF. The first IMF is subtracted from the data and the difference is treated
as a new signal on which the same sifting procedure is applied to obtain the next IMF. The
decomposition is stopped when the last IMF has a small amplitude or becomes monotonic
(Huang et al., 1998; Bekara and van der Baan, 2009; Han and van der Baan, 2011). The sifting
procedure ensures the first IMFs contain the detailed components of the input signal; the last one
solely describes the signal trend.
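Below is a bare-bones sketch of the sifting procedure just described, assuming NumPy and SciPy; real implementations add careful end-point handling and more sophisticated stopping criteria, so this is an illustration only.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower cubic-spline
    envelopes (end effects are ignored in this sketch)."""
    t = np.arange(len(x))
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 3 or len(imin) < 3:           # too few extrema: treat as trend
        return None
    upper = CubicSpline(imax, x[imax])(t)
    lower = CubicSpline(imin, x[imin])(t)
    return x - 0.5 * (upper + lower)

def emd(x, n_sift=10, max_imfs=8):
    """Bare-bones EMD: repeatedly sift the residual to extract IMFs."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(n_sift):                  # fixed number of sifting passes
            h_next = sift_once(h)
            if h_next is None:
                break
            h = h_next
        if np.allclose(h, residual):             # nothing left to extract
            break
        imfs.append(h)
        residual = residual - h
    return imfs, residual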

Properties that render EMD interesting for seismic signal analysis are:

(1) The decomposition is complete in the sense that summing all IMFs reconstructs the
original input signal and no loss of information is incurred;
(2) IMFs are quasi-orthogonal such that the cross-correlation coefficients between the
different IMFs are always close to zero;
(3) The IMFs have partially overlapping frequency contents differentiating the
decomposition from simple bandpass filters;
(4) No predefined decomposition basis is defined in contrast with Fourier, wavelet, and
S-transforms (Huang et al., 1998; Flandrin et al., 2004; Bekara and Van der Baan, 2009).

Unfortunately, as desirable as the last two properties can be, they may also constitute a major
obstacle restricting the performance of EMD due to intermittency and mode mixing (Huang,
1999; Huang et al., 2003). Mode mixing is defined as a single IMF consisting of signals of
widely disparate scales or a signal of a similar scale residing in different IMF components
(Huang and Wu, 2008). Deering and Kaiser (2005) try to use signal masking to solve the mode
mixing problem. However, the masking function is complicated to estimate in real-world
applications. In the next section, we therefore introduce the recently proposed ensemble and
complete ensemble EMD variants designed to prevent mode mixing.

Ensemble empirical mode decomposition

Based on the filter bank structure of EMD (Flandrin et al., 2004), Wu and Huang (2009) propose
the ensemble EMD to overcome mode mixing. EEMD is a noise-assisted analysis method: it
injects noise into the decomposition algorithm to stabilize its performance. The implementation
procedure for EEMD is simple (Wu and Huang, 2009):

1. Add a fixed percentage of Gaussian white noise onto the target signal.
2. Decompose the resulting signal into IMFs.
3. Repeat steps (1) and (2) several times, using different noise realizations.
4. Obtain the ensemble averages of the corresponding individual IMFs as the final result.

The added Gaussian white noise series are zero mean with a constant flat-frequency spectrum.
Their contribution thus cancels out and does not introduce signal components not already present
in the original data. The ensemble-averaged IMFs therefore maintain their natural dyadic
properties and effectively reduce the chance of mode mixing.
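A minimal sketch of the EEMD recipe above; emd_func stands for any EMD routine returning (list of IMFs, residual), e.g. the bare-bones emd sketch from the previous section or a library implementation. The parameter names and defaults are illustrative.

import numpy as np

def eemd(x, emd_func, noise_pct=0.1, n_realizations=100, max_imfs=8):
    """Ensemble EMD: average the IMFs obtained from many noise-added copies
    of the signal."""
    sigma = noise_pct * np.std(x)
    stacked = np.zeros((max_imfs, len(x)))
    counts = np.zeros(max_imfs)
    for _ in range(n_realizations):
        noisy = x + sigma * np.random.randn(len(x))      # step 1: add white noise
        imfs, _ = emd_func(noisy)                        # step 2: decompose
        for k, imf in enumerate(imfs[:max_imfs]):
            stacked[k] += imf
            counts[k] += 1
    # step 4: ensemble averages of the corresponding IMFs
    return [stacked[k] / counts[k] for k in range(max_imfs) if counts[k] > 0]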

Although EEMD can improve EMD performance, it does leave another question: Is it a complete
decomposition? Does the sum of all resulting IMFs reconstruct the original signal exactly?
Unfortunately, by design, each individual noise-injected EMD application can produce a
different number of IMFs. Summing the ensemble-averaged IMFs does not perfectly recreate
the original signal, although the reconstruction error decreases with an increasing number of
employed noise realizations, at the expense of increasing computation times.

Complete ensemble empirical mode decomposition

Complete ensemble empirical mode decomposition is also a noise-assisted method. The
procedure of CEEMD can be described as follows (Torres et al., 2011):


First, add a fixed percentage of Gaussian white noise onto the target signal, and obtain the first
EMD component of the data with noise. Repeat the decomposition I times using different noise
realizations and compute the ensemble average to define it as the first IMF of the target signal:

IMF_1 = \frac{1}{I}\sum_{i=1}^{I} E_1[x + \varepsilon w_i],        (A2.9)

where IMF1 is the first EMD component of the target signal x, wi is zero-mean Gaussian white
noise with unit variance, ε is a fixed coefficient, Ei[ ] produces the ith IMF component and I is the
number of realizations. Then calculate the first signal residue r1, where

r_1 = x - IMF_1.        (A2.10)

Next, decompose the realizations r_1 + \varepsilon E_1[w_i], i = 1, 2, \ldots, I, until they reach their first IMF conditions
and define the ensemble average as the second IMF:

IMF_2 = \frac{1}{I}\sum_{i=1}^{I} E_1[r_1 + \varepsilon E_1[w_i]].        (A2.11)

For k = 2, 3, \ldots, K, calculate the kth residue r_k = r_{k-1} - IMF_k, then extract the first IMF component
of r_k + \varepsilon E_k[w_i], i = 1, 2, \ldots, I, and compute again the ensemble average to obtain IMF_{k+1} of the
target signal:

IMF_{k+1} = \frac{1}{I}\sum_{i=1}^{I} E_1[r_k + \varepsilon E_k[w_i]].        (A2.12)

The sifting process is continued until the last residue does not have more than two extrema,
producing

R = x - \sum_{k=1}^{K} IMF_k,        (A2.13)

where R is the final residual and K is the total number of IMFs. The target signal can therefore
be expressed as

x = \sum_{k=1}^{K} IMF_k + R.        (A2.14)


Equation (A2.14) makes CEEMD a complete decomposition method (Torres et al., 2011).
Compared with both EMD and EEMD, CEEMD not only solves the mode mixing predicament,
but also provides an exact reconstruction of the original signal. Therefore, it is more suitable than
EMD or EEMD to analyze seismic signals.
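In practice one would normally use an existing implementation rather than code CEEMD from scratch. A minimal usage sketch, assuming the open-source PyEMD package is installed (its CEEMDAN class implements the Torres et al., 2011 algorithm); the test signal and parameter choices are illustrative:

import numpy as np
from PyEMD import CEEMDAN            # assumes the PyEMD package is available

t = np.linspace(0.0, 2.0, 1001)
signal = np.cos(2 * np.pi * 20 * t) + np.where((t > 1.3) & (t < 1.7),
                                               np.cos(2 * np.pi * 40 * t), 0.0)

ceemdan = CEEMDAN(trials=100)        # 100 noise realizations
imfs = ceemdan(signal)               # 2D array: one row per IMF
print(imfs.shape)
print(np.max(np.abs(signal - imfs.sum(axis=0))))   # how much the IMFs leave behind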

Instantaneous frequency

The local symmetry property of the IMFs ensures that instantaneous frequencies are always
positive, thereby rendering EMD or its variants interesting for time-frequency analysis (Huang et
al., 1998). Seismic instantaneous attributes (Taner et al., 1979) are derived from the seismic trace
x(t) and its Hilbert transform y(t) by computing its analytic signal, given by,

z(t) = x(t) + i y(t) = R(t)\exp[i\theta(t)].        (A2.15)

where R(t) and θ (t) denote the instantaneous amplitude and instantaneous phase, respectively.
Instantaneous amplitude is the trace envelope, also called reflection strength, defined as

R(t) = \sqrt{x(t)^2 + y(t)^2}.        (A2.16)

Instantaneous frequency f(t) is defined as the first derivative of instantaneous phase, or

f(t) = \frac{1}{2\pi}\frac{d\theta(t)}{dt}.        (A2.17)

In order to prevent ambiguities due to phase unwrapping in equation (A2.17), the instantaneous
frequency can be calculated instead from

f(t) = \frac{1}{2\pi}\,\frac{x(t)\frac{dy(t)}{dt} - y(t)\frac{dx(t)}{dt}}{x(t)^2 + y(t)^2}.        (A2.18)
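A minimal sketch of equation A2.18, assuming SciPy (the small constant added to the denominator is only there to avoid division by zero in this illustration):

import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, dt):
    """Instantaneous frequency (Hz) via equation A2.18, which avoids explicit
    phase unwrapping."""
    y = np.imag(hilbert(x))                      # Hilbert transform of x
    dx = np.gradient(x, dt)
    dy = np.gradient(y, dt)
    return (x * dy - y * dx) / (2.0 * np.pi * (x**2 + y**2) + 1e-12)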


We use equations (A2.16) and (A2.18) to compute instantaneous amplitudes and frequencies for
each IMF. Contrary to the classical application of instantaneous attributes to the original signal, this
procedure produces a multitude of instantaneous frequencies at each time sample, namely one for
each IMF, allowing for a more in-depth signal analysis. The result is a time-frequency
distribution that is uniformly sampled in time but not in frequency, contrary to, for instance, the
short-time Fourier transform. There are as many instantaneous frequencies as IMFs, but most
applications produce up to a dozen IMFs, creating very sparse time-frequency representations.

We also compute the peak frequency of the various IMFs and other decomposition methods to
create a single attribute. It is defined as the frequency where the maximum energy occurs at each
time sample. Peak frequency extraction is a useful spectral decomposition technique
which has been widely applied in signal processing research (Marfurt and Kirlin, 2001;
Boashash and Mesbah, 2004).
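A minimal sketch of the peak-frequency attribute computed from a set of IMFs, assuming SciPy; the function name is illustrative:

import numpy as np
from scipy.signal import hilbert

def peak_frequency(imfs, dt):
    """At each time sample, take the instantaneous frequency of the IMF
    carrying the most energy (largest envelope)."""
    env, freq = [], []
    for imf in imfs:
        z = hilbert(imf)
        env.append(np.abs(z))
        phase = np.unwrap(np.angle(z))
        freq.append(np.gradient(phase, dt) / (2.0 * np.pi))
    env, freq = np.array(env), np.array(freq)
    winner = np.argmax(env, axis=0)              # index of the strongest IMF
    return freq[winner, np.arange(env.shape[1])]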

This attribute has the advantage that it produces a single image convenient for interpretation purposes, while further analysis using the individual frequency slices always remains feasible. In a similar fashion, Marfurt and Kirlin (2001) introduce a mean-frequency attribute as a way to summarize the information contained in a spectral decomposition.
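A possible sketch of the peak-frequency attribute, reusing the instantaneous-attribute function from the previous sketch: at each time sample we simply take the instantaneous frequency of the IMF carrying the largest instantaneous amplitude. All names are illustrative.

```python
import numpy as np

def peak_frequency(imfs, dt):
    """Peak-frequency attribute: at each sample, the instantaneous frequency
    of the IMF with the maximum instantaneous amplitude (energy)."""
    amps, freqs = [], []
    for imf in imfs:
        a, f = instantaneous_attributes(imf, dt)   # from the previous sketch
        amps.append(a)
        freqs.append(f)
    amps = np.vstack(amps)        # shape: (n_imfs, n_samples)
    freqs = np.vstack(freqs)
    winner = np.argmax(amps, axis=0)               # IMF index of maximum energy
    return freqs[winner, np.arange(freqs.shape[1])]
```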

Data Examples for Synthetic data: EMD, EEMD and CEEMD

In this section, we first compare the various EMD-based methods using synthetic signals to
demonstrate the advantages of CEEMD. Then, we show that instantaneous spectral analysis after
CEEMD has higher time-frequency resolution than traditional tools, like the short time Fourier
and wavelet transforms.

The signal in Figure A2.1 is comprised of an initial 20 Hz cosine wave, a superposed 100 Hz Morlet atom at 0.3 s, two 30 Hz Ricker wavelets at 1.07 s and 1.1 s, and three different frequency components between 1.3 s and 1.7 s of respectively 7, 30 and 40 Hz. Note that the 7 Hz frequency components are not continuous, comprising less-than-one-period portions appearing at 1.37 s, 1.51 s and 1.65 s.

EMD decomposes the synthetic data into 7 IMFs (Figure A2.2). The IMFs in Figure A2.2 show
mode mixing deficiencies. IMF1 does not solely extract the high frequency Morlet atom, but is
polluted with low frequency components. Likewise IMF2 and IMF3 mix low and high-frequency
components from a variety of signal components. This makes it difficult to recognize the
individual contributions of each component to various IMFs, thereby complicating signal
analysis.


Figure A2.1. Synthetic example: background 20 Hz cosine wave, superposed 100 Hz Morlet atom at 0.3 s, two 30 Hz Ricker wavelets at 1.07 and 1.1 s, and three different frequency components between 1.3 and 1.7 s.


Figure A2.2. EMD output displaying mode mixing. IMF1 extracts the high-frequency Morlet atom and some low-frequency components. IMF2 and IMF3 also mix different signal components.

Figure A2.3 contains the EEMD output with 10% added Gaussian white noise and 100 realizations. The mode mixing problem is reduced to a large extent; for instance, the 100 Hz Morlet atom is completely retrieved in IMF1. IMF2 mainly contains the 40 Hz signal, which is the second highest frequency component. Some slight mode mixing still occurs in IMF3 and IMF4, but at a significantly reduced level compared with the EMD output.

The CEEMD result, also using 10% Gaussian white noise and 100 realizations, is shown in Figure A2.4. The resulting IMF1 is similar to the one obtained by EEMD, retrieving the 100 Hz Morlet atom completely. The resulting IMF2 and IMF3 contain mostly the 40 Hz signal at 1.6 s as well as some other higher frequency components, and IMF4 reflects the two 30 Hz Ricker wavelets around 1.1 s, the 30 Hz frequency component at 1.4 s and the remainder of the 40 Hz signal at 1.6 s.


The background 20 Hz cosine wave is mainly reflected in IMF5. CEEMD is least affected by
mode mixing of all EMD variants.

Figure A2.5 displays the reconstruction error for both the EEMD and CEEMD results. EEMD does not perfectly reproduce the original signal, with a reconstruction error of about 0.5% of the total energy; the CEEMD error is close to machine precision and thus negligible.

Figure A2.3. EEMD output with 10% added Gaussian white noise and 100 realizations. Although some mode mixing still occurs in IMF3 and IMF4, the mode mixing problem is reduced to a large extent compared with the EMD output (Figure A2.2).


Figure A2.4. CEEMD output with 10% added Gaussian white noise and 100 realizations. The output is least affected by mode mixing of all EMD variants (compare with Figures A2.2 and A2.3).

Figure A2.5. Reconstruction error for EEMD and CEEMD results. EEMD can lead to non-negligible
reconstruction error, whereas it is close to machine precision for CEEMD.


Synthetic data: Instantaneous frequencies

After the CEEMD decomposition, each IMF is locally symmetric, such that the instantaneous
frequency of each IMF is smoothly varying and guaranteed to be positive. We compute the
instantaneous frequency of each IMF using equation A2.18 and associated instantaneous
amplitude with equation A2.16. It is possible to smooth the resulting time-frequency image by
means of a convolution with a 2D Gaussian filter of pre-specified width. This is useful for both
display purposes and initial comparison with other time-frequency transforms. Next, we compare
the resulting instantaneous spectrum with the Short-time Fourier and wavelet transforms for the
same synthetic trace shown in Figure A2.1.

All three methods can discriminate the various frequency components between 1.2 s and 2 s, namely the 7, 30 and 40 Hz signals, with acceptable temporal and spectral resolution. None of the three methods can identify the individual portions of the three 7 Hz frequency components, but solely their joint presence. The short-time Fourier transform with a 170 ms time window (Figure A2.6) does not clearly distinguish between the two Ricker wavelets at 1.07 s and 1.1 s due to its fixed time-frequency resolution and their close spacing of 30 ms. Wavelet analysis (Figure A2.7) fares better; however, the spectral resolution for the 100 Hz Morlet wavelet at 0.3 s is poor.

Figure A2.8 displays the instantaneous spectrum after CEEMD. The 100 Hz Morlet wavelet, both 30 Hz Ricker wavelets, and the three different frequency components are recovered with the highest time-frequency resolution. A small Gaussian weighted filter with a width of 6 × 6 time and frequency samples is applied to the instantaneous spectrum for display purposes.

After calculating the instantaneous frequency, we can control the time-frequency resolution by varying the size of the Gaussian weighted filter. Figure A2.9 shows the resulting instantaneous spectrum using a 30 × 30 Gaussian weighted filter, creating a result more comparable to the short-time Fourier and wavelet transforms (Figures A2.6 and A2.7).
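A sketch of this display-smoothing step, assuming the instantaneous spectrum has been gridded into a 2D array of time samples by frequency bins; the conversion from the quoted window width to a Gaussian standard deviation is an assumption made for illustration.

```python
from scipy.ndimage import gaussian_filter

def smooth_spectrum(tf_image, width):
    """Smooth a time-frequency image with a 2D Gaussian of the given width
    (in time and frequency samples), e.g. width=6 versus width=30."""
    # sigma chosen as roughly width/6 so that most of the Gaussian falls in the window
    return gaussian_filter(tf_image, sigma=width / 6.0)
```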

This synthetic example shows the potentially significantly higher time-frequency resolution of
CEEMD combined with instantaneous frequencies over that obtainable with the short time
Fourier and wavelet transforms.


Figure A2.6. STFT with a 170 ms time window. It cannot distinguish between the two Ricker wavelets at
1.07 and 1.1 s due to its fixed time-frequency resolution.

Figure A2.7. Wavelet transform analysis, which shows a better compromise between time and frequency
resolution than the short-time Fourier transform as it distinguishes both Ricker wavelets at 1.1 s. Yet, the
frequency resolution for the 100 Hz Morlet wavelet at 0.3 s is poor.


Figure A2.8. Instantaneous spectrum after CEEMD has the highest time-frequency resolution and identifies
all individual components. A 6 × 6 Gaussian weighted filter is applied for display purposes.

Figure A2.9. Instantaneous spectrum after CEEMD and a 30 × 30 Gaussian-weighted filter to make the
result more comparable to Figures A2.6 and A2.7.


Real data

Next we apply the various time-frequency analysis tools on a seismic dataset from a sedimentary
basin in Canada, shown in Figure A2.10. There are Cretaceous meandering channels at 0.42s
between CMPs 75-105 and CMPs 160-180, respectively. An erosional surface is located between
CMPs 35-50 around 0.4s. The data also contain evidence of migration artifacts (smiles) at the
left edge between 0.1s and 0.6s. Note that van der Baan et al. (2010) have used cumulative
energy and local phase attributes to interpret the same data.

Figure A2.10. Seismic data set from a sedimentary basin in Canada. The erosional surface and channels are
highlighted by arrows.

First, we take the trace for CMP 81 (Figure A2.11) as an example to show the time-frequency
distributions corresponding to the various transforms. The results for the short time Fourier
transform with a 50ms time window and the wavelet transform are shown in Figures A2.12 and
A2.13, respectively. Both tools show that there are essentially two frequency bands, a lower one
between 10-50 Hz persistent at all times, and an upper one that diminishes over time (90 Hz at
0.1s, 70 Hz at 0.5s and 50 Hz at 1s). The reduction in the high-frequency band is most likely due
to attenuation of the seismic wavelet.


Figure A2.11. Individual trace of CMP 81 in Figure A2.10. It crosses the channel at 0.42 s.

Figure A2.12. Short-time Fourier transform with a 50 ms time window on CMP 81. The strong 35 Hz anomaly at 0.42 s is due to the channel.


Figure A2.13. Wavelet analysis on CMP 81. Vertical stripes at higher frequencies are due to an increased time resolution but poorer frequency resolution. High-frequency content is diminishing over time.

Instantaneous spectral analysis combined with CEEMD, with 10% added Gaussian white noise using 50 realizations (Figure A2.14), provides a much sparser image. It reflects a similar time-frequency distribution as the two traditional tools, with both the persistent lower frequency band and the diminishing upper band visible. The sparser image is helpful for more accurately locating these spectral anomalies, thus facilitating further interpretation.


Figure A2.14. Instantaneous spectrum after CEEMD on CMP 81, displaying the highest time-frequency resolution. Similar features are visible as in Figures A2.12 and A2.13, including the channel at 0.42 s and the diminishing high-frequency content over time.

Next, we pick the peak frequency at each time sample and overlay it onto the original seismic data. Figure A2.15 shows the peak frequency after the short-time Fourier transform. This image shows smooth and continuous features, including alternately high and low frequency bands between 0.2 and 0.8 s due to variations in reflector spacing, and a general decrease in high frequencies, which is associated with attenuation of the seismic wavelet.


Figures A2.15 and A2.16. Peak-frequency attribute overlaid on the seismic section: short-time Fourier transform result (A2.15) and CEEMD-based result (A2.16).

Figure A2.15 delineates several interesting features in this dataset. First, the peak frequency
attribute highlights the Cretaceous meandering channels at 0.42s, which are characterized by
lower frequency content due to their increased thickness. Second, it indicates the weakening of
the closely spaced reflections (thin layers) around 0.8s. High peak frequencies are clearly visible
between CMPs 0-75, followed by predominantly low frequencies due to the thick homogeneous
layer underneath. A comparison with the original section (Figure A2.10) shows indeed a
reduction in the number of closely spaced reflections from the left to the right around 0.8s,
although the migration artifacts visible at the left edge may also influence the high-frequency
region to some extent.

At first sight, the CEEMD-based peak frequencies seem noisier (Figure A2.16). However, the image contains more fine detail compared with the short-time Fourier result (Figure A2.15). Both images delineate the Cretaceous meandering channels around 0.42 s. Also, the thin-layer reflection at 0.80 s is more clearly followed, without the abrupt transition to a low-frequency layer at CMP 75 due to the influence of the underlying thick opaque layer. This is a direct result of the higher time resolution of CEEMD combined with computation of instantaneous frequencies. On the other hand, initial inspection of the smoother results for the short-time Fourier transform facilitates interpretation of the CEEMD results.

Next, we extract the 30 Hz and 50 Hz frequency slices after CEEMD and short time Fourier
transforms (Figure A2.17) to illustrate the higher time-frequency resolution of the CEEMD-
based results. The instantaneous spectrum shows much sparser outputs and resolves the spectral
characteristics of the various reflections more clearly than the short time Fourier results. This
also explains why the Fourier-based peak-frequency attribute is more continuous than the
CEEMD-based result in Figures A2.15 and A2.16.


Figure A2.17. Constant-frequency slices: (a) 30 Hz CEEMD-based method, (b) 30 Hz short-time Fourier transform, (c) 50 Hz CEEMD-based method, (d) 50 Hz short-time Fourier transform. The instantaneous spectrum combined with CEEMD shows higher time-frequency resolution than the short-time Fourier transform.

Finally, we perform a spectral decomposition of a 3D seismic data volume using both approaches. Figure A2.18 shows a time slice at 420 ms displaying both the channel feature and a subtle fault. CEEMD again employs 10% added Gaussian white noise and 50 realizations. A window length of 150 ms (75 points) is used for the short-time Fourier transform, producing a frequency step of 7 Hz in the spectral decomposition.

Figure A2.18. The conventional amplitude slice at time 420 ms. The channel feature is clearly shown.


Figures A2.19a and A2.19c show respectively the 10 Hz and 30 Hz spectral slices for the
instantaneous spectrum after CEEMD at 420ms. Both the channel and fault are visible,
especially at 30Hz. Both spectral slices show similar features; yet there are also clear differences,
in particular in the amplitudes of the channel, indicating little spectral leakage across these two
frequencies. These amplitude differences are helpful in interpreting thickness variations.

The 10 and 30 Hz spectral slices produced by Fourier analysis also show the fault and channel features (Figures A2.19b and A2.19d). However, there are significantly fewer amplitude variations across both slices, since unique frequencies are spaced 7 Hz apart due to the short window length and the spectral leakage inherent to the Fourier transform. This renders interpretation of thickness variations in the channel much more challenging, as thinning or thickening by a factor of two may still produce the same amplitudes across several spectral slices centered on the expected peak frequency. We could have opted for a longer Fourier analysis window, thereby reducing the frequency step in the amplitude spectra. On the other hand, this increases the risk of neighboring reflections negatively biasing the decomposition results. No local analysis window is defined for the CEEMD method, thus circumventing this trade-off.

Figure A2.19. Comparison of time slices: (a) 10 Hz CEEMD-based method; (b) 10 Hz short-time Fourier transform; (c) 30 Hz CEEMD-based method; (d) 30 Hz short-time Fourier transform. The CEEMD-based method highlights the geologic features more clearly and facilitates the interpretation of thickness variations.


Discussion

Instantaneous frequency can be used to detect and map meandering channels and to determine their thickness (Liu and Marfurt, 2006), as it maps the frequency at which maximum constructive interference occurs between the top and bottom channel reflections. However, direct calculation can lead to instantaneous frequencies that fluctuate rapidly with spatial and temporal location (Barnes, 2007; Han and van der Baan, 2011). Saha (1987) discusses the relationship between instantaneous frequency and Fourier frequency, and points out that the instantaneous frequency measured at an envelope peak approximates the average Fourier spectral frequency weighted by the amplitude spectrum. Huang et al. (2009) summarize the applicability conditions for instantaneous frequency: namely, the time series must be mono-component and narrow band. Analysis of instantaneous frequencies was gradually replaced by spectral decomposition techniques in the 1990s due to their increased flexibility (Chakraborty and Okaya, 1995; Partyka et al., 1999).

CEEMD successfully overcomes the mode-mixing problem, thus facilitating the analysis of individual IMFs. The subsequent computation of the instantaneous frequency then leads to relatively smoothly varying and positive instantaneous frequencies suitable for time-frequency analysis. In addition, both the synthetic and real data examples show that this produces a potentially higher time-frequency resolution than the short-time Fourier and wavelet transforms. Window length, overlap and mother wavelet parameters restrict the resolution of the short-time Fourier and wavelet transforms, and their predefined decomposition bases render these two methods less suitable for analyzing non-stationary systems.

The computational cost of CEEMD is proportional to the number of realizations. We use 50 realizations in the real data application to balance computational cost versus satisfactory decomposition results. Broadly speaking, we found in our tests that the computational costs of a wavelet transform and of CEEMD using 50 realizations are respectively twice and 18 times that of a short-time Fourier transform. A single EMD decomposition can thus be faster than a single STFT result. Obviously these computation times strongly depend on the implementation and actual parameter settings, yet applications of EMD and its variants are not prohibitively expensive.
The actual time-frequency resolution of any EMD variant in combination with computation of
the instantaneous frequency is to the best of our knowledge still unknown. The uncertainty
principle states that it is impossible to achieve simultaneously high time and frequency
resolution, as their product is always greater than or equal to a constant. In the short time Fourier
transform, the window length causes the tradeoff between time and frequency resolution. Large
time windows achieve good frequency resolution at the cost of high time resolution, and vice
versa. Conversely wavelet and S-transforms display an inherent trade-off between time and
frequency resolution via their variable-size analysis windows (Rioul and Vetterli, 1991; Kumar
and Foufoula-Georgiou, 1997).

The instantaneous frequency calculates a frequency value at every time sample, producing the
highest possible time resolution but with necessarily very poor frequency resolution. This


provides an alternative insight into why negative frequency values are not uncommon. However,
instantaneous frequency is not meaningless as the instantaneous frequency measured at an
envelope peak approximates the weighted average Fourier spectral frequency, and shows
superior results on mono-component and narrow band signals (Saha, 1987; Huang et al., 2009).

Flandrin et al. (2004) show that EMD acts as a constant-Q bandpass filter for white-noise time
series. In other words, white noise is divided into IMF components each comprising
approximately a single octave. Results by Torres et al. (2011) imply that CEEMD maintains this
property. Given the uncertainty principle, we postulate therefore that the inherent frequency
resolution of each individual IMF is one octave with a time resolution inversely proportional to
the center frequency of this octave. The obtained IMFs have thus an increasing frequency
resolution at the expense of a decreasing time resolution with increasing IMF number. In other
words, the first IMF has thus the highest time resolution and the lowest frequency resolution.
The opposite is true for the last IMF. Furthermore this implies that temporal fluctuations in the
instantaneous frequencies are limited to approximately the reciprocal of the center frequency of
the corresponding octave, or to put it differently, all computed instantaneous frequencies are
guaranteed to be relatively smooth within their various scale lengths.

The preceding discussion assumes a white-noise signal. For arbitrary signals the performance of
CEEMD in combination with instantaneous attributes may retrieve even more accurate and
precise time-frequency decompositions if the original trace is comprised of individual mono-
component and narrowband signals as the sifting algorithm is designed to extract individual
IMFs with precisely such characteristics. One important assumption in EMD and its variants is
that the observed signal is comprised of narrowband sub-signals. The instantaneous frequency of
each IMF then accurately captures their characteristics. On the other hand, if sub-signals have
some bandwidth, such as a Ricker wavelet or Morlet wavelet, the described method will seek to
collapse this to a single frequency. This could be interpreted as super-resolution as it may help in
analyzing subtle variations but simultaneously it may hide the true bandwidth of this sub-signal.
In addition this may not be suitable for all signal analysis, e.g., if an attenuation analysis is
required using spectral ratios (Reine et al., 2009, 2012a, 2012b).

Finally, the main advantages of CEEMD combined with instantaneous frequencies are its ease of implementation and its controllable time-frequency resolution. There are only two parameters in CEEMD, namely the percentage of Gaussian white noise and the number of noise realizations; neither seems to have a critical influence on the final decompositions. Furthermore, we can control the time-frequency resolution through the size of the Gaussian weighted filter: smaller sizes give higher temporal-spectral resolution, and vice versa. It is therefore possible to first compute a decomposition result similar to those of the short-time Fourier and wavelet transforms, and then reduce the filter size for further, more precise analysis, thus allowing for seismic interpretation with controllable time-frequency resolution.


The real data example verifies that the instantaneous spectrum after CEEMD has higher time-frequency resolution than traditional decompositions. However, the associated peak-frequency attribute may therefore vary more rapidly both spatially and temporally, rendering the interpretation more challenging. Our recommendation is to first analyze the principal frequency variations with the short-time Fourier transform or severely smoothed CEEMD-based instantaneous frequencies, followed by identification of the subtle changes in geology using the unsmoothed instantaneous spectrum.

Conclusion

CEEMD is a robust extension of EMD methods. It solves not only the mode mixing problem, but
also leads to complete signal reconstructions. After CEEMD, instantaneous frequency spectra
manifest visibly higher time-frequency resolution than short time Fourier and wavelet transforms
on both synthetic and field data examples. These characteristics render the technique highly
promising for both seismic processing and interpretation.


Appendix 3 Edge Preserving Filters


A good discussion of edge preserving filters is found in Al-Dossary and Marfurt (2006). The authors start by describing smoothing filters, which are implemented by averaging the traces in a running n × n trace analysis window, where n is typically equal to 3 or 5. Then, if we let J = n², we can label each of the traces as d_j, where j = 1, 2, … , J. The simplest is the mean filter, given as:

$d_{mean} = \frac{1}{J}\sum_{j=1}^{J} d_j$ .  (A3-1)

The mean filter is very effective if there are no large outliers in the data, but will be biased if the data contains outliers. The median filter is better at rejecting outliers and is found by first arranging the samples from smallest to largest, or $d_{j(1)} \le d_{j(2)} \le \cdots \le d_{j(k)} \le d_{j(k+1)} \le \cdots \le d_{j(J)}$, and then selecting the central value as the median:

$d_{median} = d_{j(k)}, \quad k = \frac{J+1}{2}$ .  (A3-2)

However, the median filter is less effective than the mean filter at rejecting random noise. The alpha-trimmed mean combines the best aspects of the mean and median filters and is found by first rejecting a percentage α of the data on each side of the size-ordered data values and then taking the mean of the rest. The formula is given by

$d_\alpha = \frac{1}{(1 - 2\alpha)J}\sum_{j = \alpha J + 1}^{(1-\alpha)J} d_{j(k)}$ .  (A3-3)

Note that 0 ≤ α < 0.5; when α equals 0 the filter reduces to the mean filter, and when α reaches 0.5 we replace equation A3-3 with equation A3-2 (the median filter).
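The three filters of equations A3-1 to A3-3 can be sketched in a few lines of Python, applied to the J = n² values of one analysis window (the window extraction itself is omitted; names are illustrative):

```python
import numpy as np

def mean_filter(d):
    return np.mean(d)                      # equation A3-1

def median_filter(d):
    return np.median(d)                    # equation A3-2

def alpha_trimmed_mean(d, alpha=0.1):
    """Alpha-trimmed mean (equation A3-3): drop a fraction alpha of the sorted
    values on each side, then average the rest. alpha=0 gives the mean filter;
    alpha approaching 0.5 approaches the median filter."""
    d = np.sort(np.asarray(d).ravel())
    J = d.size
    lo = int(np.floor(alpha * J))
    hi = J - lo
    return d[lo:hi].mean()
```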


Appendix 4 Anisotropic Diffusion Filter


In this appendix we discuss the theory and implementation of the anisotropic diffusion filter. The main aim of anisotropic diffusion is to smooth the data without blurring edges, thereby removing noise and small-scale geological features in order to emphasize the main geological and structural features of an area (Fehmers and Höcker, 2003). This is done by a diffusion procedure in which the diffusion tensor is computed from the local structure (i.e. parallel to the reflections). The process is iterative, gradually simplifying the data to reveal the overall structure. You can determine the major features and then return to less filtered versions to discover more detail.

Concept

The term “diffusion” was used because of the similarity to diffusion in a medium, which
balances the values of an attribute (such as temperature or solution concentration). In this
method, seismic amplitudes are “balanced” in a volume.

Therefore, the gradient caused by the amplitude differences in the seismic data causes a “flux”
from the high amplitudes to the low amplitudes. This “flux vector” is:

Fick's Law: $\vec{j} = -D\,\nabla U$ ,

where $\nabla U$ is the gradient of the seismic amplitude, $\nabla U = \left[\dfrac{\partial U}{\partial x},\ \dfrac{\partial U}{\partial y},\ \dfrac{\partial U}{\partial t}\right]^T$, and D is the diffusion tensor.

Note that the diffusion process neither increases nor decreases seismic amplitude. By using a tensor instead of a scalar, we can include anisotropy; in this case, diffusion (i.e. smoothing) can be inhibited in some directions. D is constructed so that its eigenvectors follow the local structure (so that D = D(U)).

However, if the diffusion is not controlled, it will remove discontinuities such as faults.
Therefore, a continuity factor, ε, is included, where 0 ≤ ε ≤ 1. At a fault, ε approaches 0, and
where diffusion is not hindered, ε approaches 1.

To average the orientation of the structure, we use the structural tensor S, where S = ∇U(∇U)^T and T represents the transpose. S_σ is the structural tensor at the scale σ.


The basic iterative anisotropic diffusion equation, in the time domain, is:

$U_{n+1} = U_n + \Delta t\,\nabla\cdot(\varepsilon_n D_n \nabla U_n)$  (A4-1)

where $U_n$ and $U_{n+1}$ are the current and updated sample values,

$\Delta t$ is the time-step (increment) value,

$\varepsilon_n$ is the continuity factor, also called the edge detecting or fault preserving factor,

$D_n$ is the anisotropic diffusion tensor, a dimensionless quantity,

and $\nabla U_n$ is the gradient in the x, y and t directions for the current sample.

Process

1. The sample values are measured from map slices taken from a data volume. The gradients are
measured in the x, y and z (i.e. time) directions for each sample in the slice (see AVO Gradient
Theory). These results are used to create three map slice vector objects.

2. Each slice gives a vector object, Un .

3. Compute the gradient, ∇U for each sample for the x, y and z directions, where z is the time
domain and x and y are the azimuthal directions. This produces three map slice vector objects:
Ux, Uy and Uz. These values will be later used in two places.

4. Compute the outer product of the gradient ∇U with itself: ∇U(∇U)^T. This gives a 3 × 3 matrix with six independent entries, $U_xU_x$, $U_xU_y$, $U_xU_z$, $U_yU_y$, $U_yU_z$ and $U_zU_z$, which form the base structural tensor S:

$S = \begin{bmatrix} U_xU_x & U_xU_y & U_xU_z \\ U_yU_x & U_yU_y & U_yU_z \\ U_zU_x & U_zU_y & U_zU_z \end{bmatrix}$

Note that the matrix is symmetric about its diagonal, so only six of its elements are independent.

Each of these six volumes can be smoothed at two different scales in order to:


• Produce an edge/fault detecting and preserving filter. This edge protecting filter is optional.

• Smooth the volume at a small scale factor to be used for the anisotropic diffusion matrix.

This will result in 12 data volumes of attributes to be filtered.

Therefore, six volumes are filtered with the σ scale smoother. They compose the independent
elements of the Sσ Tensor (aka the Structural Tensor at scale σ).

Six more volumes can be filtered with the ρ scale smoother. They compose the independent
elements of the Sρ Tensor (aka the Structural Tensor at scale ρ).

Gaussian Filter Details


5. Apply a 3D Gaussian smoother (Geng et al., 2013):

$G_\sigma = \frac{1}{(2\pi)^{3/2}\sigma^3}\exp\left(-\frac{x^2 + y^2 + z^2}{2\sigma^2}\right)$  (A4-2)

where σ is the scale (i.e. distance) selected for smoothing.

The values for σ and ρ must be selected. We recommend that ρ > σ and that both range from 1 to 9; the bin size spacing is often used as the unit.

The Gaussian filtering is done in the time domain with a recursive Gaussian filtering algorithm. To avoid memory exhaustion when processing large volumes, we use sub-volumes of the map slices. To prevent apparent edge effects, a parameter specifies extra slices as padding at the beginning and end of each sub-section. We recommend a value of 1 or 2 for the padding.

The Anisotropic Diffusion Tensor


6. For each sample:

a. Reconstruct the $S_\sigma$ matrix from the six independent attributes.

b. Compute the eigenvalues and eigenvectors.

c. Sort the eigenvalues in descending order.

d. Discard the most significant eigenvalue and its associated eigenvector.

e. Set the second and third eigenvalues to unity.

f. Compute the outer product of each remaining eigenvector with itself, and sum the products to create the anisotropic diffusion tensor, $\hat{D}$.


The Anisotropic Diffusion Filter, if not controlled, will smooth through edges and terminations
such as faults. Therefore we can use a Continuity Factor, ε, that detects and preserves these
features. (Fehmers and Höcker, 2003).

Continuity Factor (optional)


7. For each sample:

a. Compute the tensor product of $S_\sigma$ and $S_\rho$, and take the trace of this product, defined as the sum of the diagonal elements of the matrix: $Tr(S_\sigma S_\rho)$.

b. Compute the traces of the $S_\sigma$ and $S_\rho$ tensors individually, and form the product of these two values: $Tr(S_\sigma)\,Tr(S_\rho)$.

c. Compute the continuity factor, ε, from these results:

$\varepsilon = \dfrac{Tr(S_\sigma S_\rho)}{Tr(S_\sigma)\,Tr(S_\rho)}$

8. Construct the forward differencing scheme kernel using the anisotropic diffusion filter:

a. Compute the product $\varepsilon\hat{D}\nabla U$ for this time step and sample. Note that ∇U has been stored in three data volumes as map-slice vectors:

$\varepsilon\hat{D}\nabla U = \varepsilon\begin{bmatrix} D_{11} & D_{12} & D_{13} \\ D_{21} & D_{22} & D_{23} \\ D_{31} & D_{32} & D_{33} \end{bmatrix}\begin{bmatrix} U_x \\ U_y \\ U_z \end{bmatrix}$

b. The result is a vector with three components, stored in three new volumes:

$\varepsilon(U_x D_{11} + U_y D_{12} + U_z D_{13})\,\hat{x}$
$\varepsilon(U_x D_{21} + U_y D_{22} + U_z D_{23})\,\hat{y}$
$\varepsilon(U_x D_{31} + U_y D_{32} + U_z D_{33})\,\hat{z}$

We need these volumes because the divergence, ∇⋅, must be computed for use in the forward differencing scheme.

c. For each sample, compute the divergence of the kernel to create a scalar value:

$\nabla\cdot(\varepsilon\hat{D}\nabla U) = \frac{d}{dx}\left[\varepsilon(U_x D_{11} + U_y D_{12} + U_z D_{13})\right] + \frac{d}{dy}\left[\varepsilon(U_x D_{21} + U_y D_{22} + U_z D_{23})\right] + \frac{d}{dz}\left[\varepsilon(U_x D_{31} + U_y D_{32} + U_z D_{33})\right]$


9. Update the starting data volume of this iteration with an updated volume, based on these calculations. Choose a small time step Δt, 0.25 ≤ Δt ≤ 0.5; this is not a critically sensitive parameter.

$U_{n+1} = U_n + \Delta t\,\nabla\cdot(\varepsilon_n D_n \nabla U_n)$  (A4-3)

Note that this requires up to 34 intermediary data volumes and makes heavy use of the memory
and disk space of the computer.
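The sketch below condenses steps 1 through 9 for a single 2D (x, t) slice; an actual implementation operates on 3D sub-volumes and map slices as described above. The 2D simplification, the parameter defaults and the use of numpy/scipy helpers are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anisotropic_diffusion_step(U, sigma=1.0, rho=3.0, dt=0.3):
    """One iteration of equation A4-3 on a 2D slice U(x, t) (a simplified sketch)."""
    Ux = np.gradient(U, axis=0)          # d/dx
    Ut = np.gradient(U, axis=1)          # d/dt

    # Structure tensor S = grad(U) grad(U)^T smoothed at scales sigma and rho.
    Sxx_s = gaussian_filter(Ux * Ux, sigma)
    Sxt_s = gaussian_filter(Ux * Ut, sigma)
    Stt_s = gaussian_filter(Ut * Ut, sigma)
    Sxx_r = gaussian_filter(Ux * Ux, rho)
    Sxt_r = gaussian_filter(Ux * Ut, rho)
    Stt_r = gaussian_filter(Ut * Ut, rho)

    # Continuity factor (step 7): Tr(S_sigma S_rho) / (Tr(S_sigma) Tr(S_rho)).
    tr_cross = Sxx_s * Sxx_r + 2.0 * Sxt_s * Sxt_r + Stt_s * Stt_r
    eps = tr_cross / ((Sxx_s + Stt_s) * (Sxx_r + Stt_r) + 1e-12)

    # Diffusion tensor D (step 6): discard the dominant eigenvector (normal to
    # the reflections) and keep unit diffusion along the remaining direction.
    D11 = np.empty_like(U)
    D12 = np.empty_like(U)
    D22 = np.empty_like(U)
    for idx in np.ndindex(U.shape):
        S = np.array([[Sxx_s[idx], Sxt_s[idx]],
                      [Sxt_s[idx], Stt_s[idx]]])
        w, v = np.linalg.eigh(S)          # eigenvalues in ascending order
        v_keep = v[:, 0]                  # eigenvector of the smallest eigenvalue
        D = np.outer(v_keep, v_keep)
        D11[idx], D12[idx], D22[idx] = D[0, 0], D[0, 1], D[1, 1]

    # Flux eps * D * grad(U) and its divergence (step 8), then the update (step 9).
    Fx = eps * (D11 * Ux + D12 * Ut)
    Ft = eps * (D12 * Ux + D22 * Ut)
    div = np.gradient(Fx, axis=0) + np.gradient(Ft, axis=1)
    return U + dt * div
```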

Figure A4.1. Workflow of anisotropic diffusion filter.


Appendix 5 Coherency and Energy Ratio

In this Appendix, we will discuss the following geophysical applications:

1. The seismic correlation or covariance matrix.


2. Coherency methods based on the seismic covariance matrix.
3. The spectral decomposition of the covariance matrix.
4. Principal components analysis (PCA).
5. The Karhunen-Loéve, or KL, transform and filter.
6. Eigen-coherence (EC).
7. Energy ratio coherence (ERC).

More detailed theory and examples can be found in Jones and Levy (1987), Marfurt et al. (1999)
and Gersztenkorn and Marfurt (1999).

The seismic covariance matrix

All of our examples will consist of a seismic volume of n seismic traces, each with N samples,
which can be written as vectors in the following way:


$\vec{x}_i^T = (x_{1i}, \ldots, x_{Ni}), \quad i = 1, \ldots, n$ ,  (A5.1)

where T represents the matrix transpose operation. The seismic volume can consist of either the
full input volume, or of a small sub-volume that is moved sample-by-sample (both spatially and
temporally) around the volume. All of the trace vectors, and thus the seismic volume, can be
grouped in matrix format as

$X = \begin{bmatrix} \vec{x}_1^T \\ \vdots \\ \vec{x}_n^T \end{bmatrix} = \begin{bmatrix} x_{11} & \cdots & x_{N1} \\ \vdots & \ddots & \vdots \\ x_{1n} & \cdots & x_{Nn} \end{bmatrix}$ .  (A5.2)


Note that the seismic traces are thus the rows of X, and therefore the notation used in equation A5.2 for the subscripts is the reverse of what you are used to seeing in matrix notation (i.e. x_ij usually means that i is the row and j is the column), since we are indicating that this is a transpose of the column seismic vectors.

As pointed out by Jones and Levy (1987), the seismic volume can be thought of as what the electrical engineering community refers to as a set of correlated signals, and they cite the paper by Kramer and Mathews (1956) in which the authors propose cross-correlating each of these signals.

To put this into linear algebra terms, we can compute the un-normalized covariance matrix as follows:

$C = XX^T = \begin{bmatrix} \sum_{i=1}^{N} x_{i1}^2 & \cdots & \sum_{i=1}^{N} x_{i1}x_{in} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{N} x_{in}x_{i1} & \cdots & \sum_{i=1}^{N} x_{in}^2 \end{bmatrix} = \begin{bmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & \ddots & \vdots \\ c_{n1} & \cdots & c_{nn} \end{bmatrix}$ ,  (A5.3)

where C is the covariance matrix, the cij values represent the covariance between seismic traces i
and j, and cij = cji. Jones and Levy (1987) use the covariance matrix as a starting point for signal-
to-noise enhancement of the seismic volume using the Karhunen-Loeve (KL) Transform. More
recently, a number of authors have used the covariance matrix as the basis for what has become
known as coherency. We will describe these methods next.
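In numpy terms, with the traces stored as the rows of X as in equation A5.2, equation A5.3 is a single matrix product (a sketch; the sub-volume extraction is not shown):

```python
import numpy as np

def covariance_matrix(X):
    """X has shape (n_traces, n_samples); each row is one trace x_i^T.
    Returns the un-normalized covariance matrix of equation A5.3."""
    return X @ X.T     # c_ij = sum over samples of trace i times trace j
```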

Coherency based on the covariance matrix

A very good summary of semblance and coherence methods is given by Gersztenkorn and
Marfurt (1999) and the following remarks are adapted from that paper. The original coherence
method was proposed by Bahorich and Farmer (1995), and consisted of cross-correlating each
trace with its in-line and cross-line neighbor and averaging the result after normalizing the
energy of each cross-correlation. In the Appendix of Marfurt et al. (1999) this is called the c1
coherency algorithm, and can be written

$c_1 = \left[ \frac{c_{12}}{(c_{11}c_{22})^{1/2}} \cdot \frac{c_{13}}{(c_{11}c_{33})^{1/2}} \right]^{1/2}$ ,  (A5.4)

which is based on the n = 3 covariance matrix. Marfurt et al. (1999) improved this estimate by searching for the optimum inline dip p and cross-line dip q:

$\hat{c}_1 = \max_{p,q}\, c_1(p, q)$ ,  (A5.5)

where the correlation terms used in equation A5.5 come from the dip-optimized covariance matrix given by

$C(p,q) = \begin{bmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & \ddots & \vdots \\ c_{n1} & \cdots & c_{nn} \end{bmatrix}, \quad \text{where } c_{ij} = \sum_{t=-M\Delta t}^{M\Delta t} x_i(t - p x_i - q y_i)\, x_j(t - p x_j - q y_j)$ .

For example, the figure below (A5.1) shows the p and q dip directions on a 3 x 3 subset of traces
(n = 9) with an N = 5 sample time window.

Figure A5.1. An illustration of the inline and crossline dips p and q.

The first approach is computationally efficient but lacks robustness because it only uses three
traces. A more advanced approach by Marfurt et al. (1998) uses multi-trace semblance, in which


J traces within a sub-cube of the seismic data are correlated with each other. In the figure below
(A5.2), an elliptical window has been used.

Figure A5.2. A J trace elliptical window for correlation (from Marfurt et al. 1998).

Marfurt et al. (1999) call this the c2 algorithm and write it mathematically as:

$c_2 = \frac{\vec{a}^T C \vec{a}}{Tr(C)}$ ,  (A5.6)

where C is the n × n covariance matrix defined in equation A5.3, $\vec{a}$ is the n-element vector given by $\vec{a} = \frac{1}{J}[1, 1, \ldots, 1]^T$, and Tr(C) is the trace of C, or the sum of the values along its main diagonal (as we will discuss later, this is equivalent to the sum of the eigenvalues of the covariance matrix C). Note that the numerator in equation A5.6 is proportional to the sum of all the values in the covariance matrix C.

As with the c1 algorithm, Marfurt et al. (1999) improved the c2 estimate by searching for the optimum inline dip p and cross-line dip q:

$\hat{c}_2 = \max_{p,q}\, c_2(p, q)$ .  (A5.7)

Finally, Gersztenkorn and Marfurt (1999) extended this to a c3 algorithm which they called
eigenstructure-based coherence. But before describing this method, we need to review the
concept of spectral decomposition of a matrix.


Spectral decomposition of the covariance matrix

The covariance matrix can be reduced to its diagonal form using spectral decomposition, which can be written

$C = V\Lambda V^T = \begin{bmatrix} v_{11} & \cdots & v_{1n} \\ \vdots & \ddots & \vdots \\ v_{n1} & \cdots & v_{nn} \end{bmatrix}\begin{bmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{bmatrix}\begin{bmatrix} v_{11} & \cdots & v_{n1} \\ \vdots & \ddots & \vdots \\ v_{1n} & \cdots & v_{nn} \end{bmatrix}$ ,  (A5.8)

where V is the matrix of column eigenvectors, each of length n, and Λ is a diagonal matrix of eigenvalues $\lambda_1$ through $\lambda_n$, arranged in decreasing size. Note that each eigenvector can be written in vector form as:

$\vec{v}_i = \begin{bmatrix} v_{1i} \\ \vdots \\ v_{ni} \end{bmatrix}, \quad i = 1, \ldots, n$ .  (A5.9)

To compute the eigenvalues and eigenvectors of C we must solve the eigenvalue equation, given by $C\vec{v} = \lambda\vec{v} \Rightarrow (C - \lambda I)\vec{v} = 0$, where λ is a single eigenvalue (a scalar), I is the n × n identity matrix and $\vec{v}$ is a single eigenvector. For the well-posed problem (i.e. all the seismic traces are independent), there are n eigenvalues and eigenvectors, where n is the number of seismic traces. Furthermore, for the covariance matrix, which is real and symmetric, all of the eigenvalues are real and positive. Another very important property of the eigenvectors is that each eigenvector has an autocorrelation equal to 1, and each pair of distinct eigenvectors has a cross-correlation equal to 0. That is:

$\vec{v}_i^T \vec{v}_j = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases}$ .  (A5.10)

In matrix form, this can be written $V^T V = V V^T = I_n$, where $I_n$ is the n × n identity matrix.


Principal Component Analysis (PCA)

Principal component analysis (PCA) involves transforming the input dataset X into a domain in which the covariance matrix of the new set of n vectors is the diagonal matrix Λ. We compute the principal components by multiplying the input data matrix by the transpose of the eigenvector matrix, or

$P = V^T X = \begin{bmatrix} v_{11} & \cdots & v_{n1} \\ \vdots & \ddots & \vdots \\ v_{1n} & \cdots & v_{nn} \end{bmatrix}\begin{bmatrix} x_{11} & \cdots & x_{N1} \\ \vdots & \ddots & \vdots \\ x_{1n} & \cdots & x_{Nn} \end{bmatrix} = V^T \begin{bmatrix} \vec{x}_1^T \\ \vdots \\ \vec{x}_n^T \end{bmatrix}$ ,  (A5.11)

where P is a rectangular matrix with the same dimensions as X, called the principal component matrix, written

$P = \begin{bmatrix} \vec{p}_1^T \\ \vdots \\ \vec{p}_n^T \end{bmatrix} = \begin{bmatrix} p_{11} & \cdots & p_{N1} \\ \vdots & \ddots & \vdots \\ p_{1n} & \cdots & p_{Nn} \end{bmatrix}$ .  (A5.12)

By combining the above expressions we can write each principal component as

$\vec{p}_1^T = \vec{v}_1^T X = v_{11}\vec{x}_1^T + \cdots + v_{n1}\vec{x}_n^T$
$\;\;\vdots$  (A5.13)
$\vec{p}_n^T = \vec{v}_n^T X = v_{1n}\vec{x}_1^T + \cdots + v_{nn}\vec{x}_n^T$

That is, the ith principal component can be seen as the weighted sum of the n seismic traces, with the weights equal to the values in vector $\vec{v}_i$. This may not be obvious by looking at the general matrices, so let us show that it is true for the simple case of N = 3 and n = 2:

$\begin{bmatrix} p_{11} & p_{21} & p_{31} \\ p_{12} & p_{22} & p_{32} \end{bmatrix} = \begin{bmatrix} v_{11} & v_{21} \\ v_{12} & v_{22} \end{bmatrix}\begin{bmatrix} x_{11} & x_{21} & x_{31} \\ x_{12} & x_{22} & x_{32} \end{bmatrix} = \begin{bmatrix} v_{11}x_{11} + v_{21}x_{12} & v_{11}x_{21} + v_{21}x_{22} & v_{11}x_{31} + v_{21}x_{32} \\ v_{12}x_{11} + v_{22}x_{12} & v_{12}x_{21} + v_{22}x_{22} & v_{12}x_{31} + v_{22}x_{32} \end{bmatrix}$ ,

from which we see that $\begin{bmatrix} \vec{p}_1^T \\ \vec{p}_2^T \end{bmatrix} = \begin{bmatrix} v_{11}\vec{x}_1^T + v_{21}\vec{x}_2^T \\ v_{12}\vec{x}_1^T + v_{22}\vec{x}_2^T \end{bmatrix}$ .

We have thus transformed the original seismic traces into new traces which are orthogonal (that
is, their cross-correlation is equal to 0) and auto-correlate to the successive eigenvalues. To
prove this mathematically using the properties of eigenvectors discussed earlier, note that:

$PP^T = V^T X X^T V = V^T C V = V^T V \Lambda V^T V = \Lambda$ .
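A short numpy sketch of equations A5.8 and A5.11, with the eigenvalues sorted in decreasing size as assumed throughout this appendix; variable names are illustrative:

```python
import numpy as np

def principal_components(X):
    """X: (n_traces, n_samples). Returns eigenvalues, eigenvectors and P = V^T X."""
    C = X @ X.T                              # covariance matrix, equation A5.3
    lam, V = np.linalg.eigh(C)               # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1]            # re-sort in decreasing size
    lam, V = lam[order], V[:, order]
    P = V.T @ X                              # principal components, equation A5.11
    # Sanity check: P @ P.T should equal np.diag(lam) to numerical precision,
    # i.e. the principal components are orthogonal and auto-correlate to lam.
    return lam, V, P
```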

As a final point, note that we can also recover the original seismic traces from the principal components by a linear sum. First, recall that we showed that the inverse and transpose of the eigenvector matrix are identical. Therefore, if we multiply the principal component matrix by the eigenvector matrix we recover the original seismic matrix:

$VP = VV^T X = I_n X = X$ .  (A5.14)

In general, we can write:

$\vec{x}_1^T = v_{11}\vec{p}_1^T + \cdots + v_{1n}\vec{p}_n^T$
$\;\;\vdots$  (A5.15)
$\vec{x}_n^T = v_{n1}\vec{p}_1^T + \cdots + v_{nn}\vec{p}_n^T$

Notice that in equation (A5.15) the weights for a given trace are drawn across the eigenvectors (the jth component of each eigenvector), whereas in equation (A5.13) the weights for a given principal component came from within a single eigenvector. Therefore, although equation (A5.15) looks similar to equation (A5.13), there is a fundamental difference: the jth trace is recovered by multiplying each principal component by the jth value of the eigenvector corresponding to that principal component, and summing the results. That is, to recover the jth trace, we can write:

$\vec{x}_j^T = v_{j1}\vec{p}_1^T + \cdots + v_{jn}\vec{p}_n^T$ .  (A5.16)


The KL transform and filter

The Karhunen-Loéve, or KL, transform is identical to the PCA theory just discussed. However, it is distinguished from PCA in that it is usually applied as a filter. The KL filter involves leaving out some subset of the principal components (or, equivalently, the eigenvectors or eigenvalues) from the inverse transform given in equation (A5.15). For example, if we keep only the first m principal components, we get:

$\tilde{x}_1^T = v_{11}\vec{p}_1^T + \cdots + v_{1m}\vec{p}_m^T$
$\;\;\vdots$  (A5.17)
$\tilde{x}_n^T = v_{n1}\vec{p}_1^T + \cdots + v_{nm}\vec{p}_m^T$

where m < n. Or, if we leave out the first k principal components, we get:

$\tilde{x}_1^T = v_{1,k+1}\vec{p}_{k+1}^T + \cdots + v_{1n}\vec{p}_n^T$
$\;\;\vdots$  (A5.18)
$\tilde{x}_n^T = v_{n,k+1}\vec{p}_{k+1}^T + \cdots + v_{nn}\vec{p}_n^T$

where k > 1. Although in principle any combination of principal components could be left out of the KL filter, in practice only the examples given in equations (A5.17) and (A5.18) are used. In analogy to a bandpass filter, we could call equation (A5.17) the high-pass (or low-cut) filter and equation (A5.18) the low-pass (or high-cut) filter. If we let k = m = n/2, then these two filters will separate the data into two components that, when added together, will recover the original dataset.
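A sketch of the KL filter built on the PCA sketch above: keeping only the first m principal components corresponds to equation A5.17, while dropping the first k corresponds to equation A5.18. Names are illustrative.

```python
import numpy as np

def kl_filter(X, keep):
    """Reconstruct X from a subset of principal components.

    keep : indices of the principal components to retain, e.g. range(m) for
           equation A5.17 or range(k, n) for equation A5.18.
    """
    lam, V, P = principal_components(X)      # from the previous sketch
    keep = np.asarray(list(keep))
    Vk = V[:, keep]                           # selected eigenvectors
    Pk = P[keep, :]                           # corresponding principal components
    return Vk @ Pk                            # filtered version of X (cf. eq. A5.14)
```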

Eigen-coherence and energy ratio coherence

In the appendix to their paper, Jones and Levy (1987) state the mathematical result
$trace(\Lambda) = trace(C) = \sum_{i=1}^{n}\sum_{j=1}^{N} x_{ij}^2 = E(X)$ ,  (A5.19)

where trace represents the sum along the main diagonal of a matrix and E(X) is the energy, or
sum of squares, of the values in dataset X. In other words, the sum of the eigenvalues is equal to
the sum of the trace autocorrelations and also to the total energy of the seismic volume, or the

sum of the squared amplitudes of all traces. Note that if we let $\tilde{\Lambda}$ represent the filtered eigenvalue matrix found by setting some of the eigenvalues to zero, we can also write:

$trace(\tilde{\Lambda}) = \sum_{i=1}^{n}\sum_{j=1}^{N} \tilde{x}_{ij}^2 = E(\tilde{X})$ ,  (A5.20)

where $\tilde{x}_{ij}$ represents the samples in the KL-filtered volume $\tilde{X}$. Gersztenkorn and Marfurt (1999) compute what they call eigenstructure-based coherence (which we will shorten to eigen-coherence, or EC) from a sub-cube of a larger seismic volume. That is, the matrix X from equation A5.2 is a sub-cube of dimension n = m² traces by N samples which moves through the larger volume sample-by-sample in the spatial and temporal directions. From this cube, the covariance matrix C is computed by equation A5.3 and the eigenvalue matrix Λ is computed as in equation A5.8. For each sub-cube, the eigen-coherence value is computed by the formula

$EC = \frac{\lambda_{max}}{\sum_{i=1}^{n}\lambda_i}$ ,  (A5.21)

and is placed at the center of the sub-cube to create a volume of eigen-coherence values. That is,
we compute the maximum eigenvalue of C divided by the sum of the eigenvalues of C, where
the denominator is equivalent to the trace of C. We next define energy-ratio coherence (ERC) for
each sub-volume as
$ERC = \frac{E(\tilde{X})}{E(X)} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{N}\tilde{x}_{ij}^2}{\sum_{i=1}^{n}\sum_{j=1}^{N} x_{ij}^2}$ ,  (A5.22)

which is the energy of the KL filtered sub-volume divided by the energy of the original sub-
volume. Note that if the KL filter keeps only the largest principal component, the energy-ratio
coherence method will give the same answer as the eigen-coherence method. As an extension of
the original method, the Energy Ratio Attribute code allows the user two further options:

1. To extract any number of energy-ratio eigen-components.


2. To perform optimum dip-steering in the inline and cross-line directions.

Option number 1 can be written in eigen-coherence form as

$EC_i = \frac{\lambda_i}{\sum_{i=1}^{n}\lambda_i}$ ,  (A5.23)
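Tying these pieces together, a sketch of the eigen-coherence and energy-ratio coherence values of equations A5.21 to A5.23 for one analysis sub-cube is given below; the flattening of the sub-cube into an n-trace by N-sample matrix is assumed to have been done beforehand.

```python
import numpy as np

def coherence_attributes(X, n_keep=1):
    """Eigen-coherence and energy-ratio coherence for one sub-cube.

    X      : (n_traces, n_samples) matrix of the analysis sub-cube.
    n_keep : number of principal components kept by the KL filter.
    """
    C = X @ X.T
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]     # eigenvalues, decreasing
    ec = lam[0] / lam.sum()                        # equation A5.21
    # Energy of the KL-filtered sub-volume equals the sum of the kept eigenvalues,
    # so ERC reduces to a ratio of eigenvalue sums (equation A5.22).
    erc = lam[:n_keep].sum() / lam.sum()
    return ec, erc
```

With n_keep = 1 the energy-ratio value equals the eigen-coherence value, as noted above.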


Appendix 6: Edge Enhancement Attributes


The edge enhancement attributes under Volume Attributes are all based on the differentiation of some seismic attribute in the x, y or time direction. Thus, before describing the algorithms themselves, we will briefly discuss the effect of differentiating the seismic trace. Let us first consider a one-dimensional Gaussian shape, which is given by the formula:

$g(t) = A\exp\left(-\frac{t^2}{\sigma^2}\right)$  (A6.1)

where σ gives the width of the Gaussian and A gives its amplitude. The derivative of this
function is

$\frac{dg(t)}{dt} = -2A\frac{t}{\sigma^2}\exp\left(-\frac{t^2}{\sigma^2}\right)$ .  (A6.2)

It is clear from Figure A6.1, where these two functions are plotted, that the derivative of the
Gaussian function has a zero crossing and that this zero crossing corresponds to the maximum
value on the original Gaussian function. That is, the derivative turns peaks into edges.

Figure A6.1: The Gaussian pulse (heavy line) and its derivative (dotted line) in the time direction.


Another way to look at the effect of the derivative on the seismic trace is to consider the effect of differentiation on a single frequency component given by $f(t) = \exp(i\omega t) = \cos(\omega t) + i\sin(\omega t)$, where $i = \sqrt{-1}$, $\omega = 2\pi f$ and f = frequency. The derivative of this function is given by

$\frac{df(t)}{dt} = i\omega\exp(i\omega t)$ .  (A6.3)

Since multiplication by the complex number i can be thought of as a 90 degree phase shift, differentiation rotates the trace by 90 degrees and increases the frequency content, since the spectrum is being multiplied by the angular frequency term ω. Since the seismic volume is a function of the inline coordinate x, the cross-line coordinate y and the vertical coordinate t, we should more formally use the partial derivative operator to define the vertical derivative, or $\frac{\partial f(x,y,t)}{\partial t}$. In a similar fashion, the partial derivatives in the x and y directions are respectively $\frac{\partial f(x,y,t)}{\partial x}$ and $\frac{\partial f(x,y,t)}{\partial y}$. In the equations that follow in the next section, we simplify our notation by dropping the x, y, and t terms from the function f since they are implied by the partial differentiation. The inline and crossline derivatives can be illustrated by the same figure as before, except that it has been rotated to show horizontal distance rather than vertical time, as shown in Figure A6.2.

Figure A6.2: The Gaussian pulse (heavy line) and its derivative (dotted line) in the inline or cross-line
direction.


As with the vertical, or time, derivative the spatial derivative also converts a peak to a zero-
crossing, and thus enhances any edges found in the spatial x or y directions. We can also apply
the frequency analysis of equation A6.3 to the spatial derivatives, which again tells us that
differentiation in the horizontal direction increases the high spatial frequency content and applies
a ninety degree phase shift. From these properties of the derivative in the time and horizontal
directions and the temporal and spatial frequency domains, it appears reasonable that the
derivative could be used to detect subtle changes in the seismic signal that relate to edges such as
faults or fractures. It should also be noted that the Gaussian derivatives shown in Figures A6.1
and A6.2 were used by al-Dossary et al. (2006) in their implementation of the Canny filter.

Next, let us discuss the attributes themselves. The edge detection attributes found in Volume
Attributes are taken from the summary paper by Cooper and Cowan (2008), in which they
review the potential field literature and summarize a number of algorithms for the edge
enhancement of potential-field data that had been developed by previous authors. The authors
also introduce their own new algorithm which is based on normalized statistics.

Total horizontal derivative (TDX)

A commonly used edge-detection filter (Cooper and Cowan, 2008) is the total horizontal
derivative (TDX) which is defined mathematically as

$TDX = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}$ .  (A6.4)

The two-dimensional filter TDX (equation A6.4) is also known as the Sobel filter and is used in image processing to sharpen photographic images. An excellent discussion of the Sobel filter for image enhancement is given by Gonzalez and Woods (2008). They point out that the two spatial derivatives found in equation A6.4 can be expressed as the gradient vector given by

$\nabla f = \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[2mm] \dfrac{\partial f}{\partial y} \end{bmatrix}$ .  (A6.5)

Thus, the TDX operator in equation A6.4 is the magnitude, or length, of the gradient vector, which can be written mathematically as $|\nabla f|$. The Sobel filter is actually the name given to the numerical implementation of the spatial derivatives on a 3 × 3 grid, as shown in Figure A6.3.



Figure A6.3: The numerical implementation of the Sobel filter, where (a) shows the 3 x 3 grid notation,
(b) shows the weights used for the x derivative, and (c) shows the weights used for the y derivative,
where x is the horizontal direction and y is the vertical direction.

Figure A6.3(a) explains the notation for the nine values in the 3 × 3 grid, Figure A6.3(b) shows the numerical implementation of the x derivative, and Figure A6.3(c) shows the numerical implementation of the y derivative, where x is the horizontal direction and y is the vertical direction. This can be written mathematically as follows, where equation A6.6 is the average approximate gradient in the x direction and equation A6.7 is the average approximate gradient in the y direction:

$\frac{\partial f}{\partial x} = (z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)$ , and  (A6.6)

$\frac{\partial f}{\partial y} = (z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)$ .  (A6.7)
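A sketch of this 3 × 3 implementation on a single map slice, combining the column and row sums of equations A6.6 and A6.7 into the TDX magnitude of equation A6.4; the array layout and the use of scipy for the moving-window sums are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import correlate

def tdx_slice(slice2d):
    """Total horizontal derivative (equation A6.4) of a 2D map slice."""
    # Weights of equations A6.6 and A6.7 on the 3x3 grid of Figure A6.3:
    kx = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)     # right columns minus left columns
    ky = np.array([[-1, -1, -1],
                   [ 0,  0,  0],
                   [ 1,  1,  1]], dtype=float)   # bottom row minus top row
    dfdx = correlate(slice2d, kx, mode="nearest")
    dfdy = correlate(slice2d, ky, mode="nearest")
    return np.sqrt(dfdx**2 + dfdy**2)
```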

Note that the TDX, or gradient, filter just discussed should not be confused with the Laplacian filter discussed by Chopra and Marfurt (2012), which involves the sum of second partial derivatives rather than the sum of squared first partial derivatives. Chopra and Marfurt (2012) refer to the output of the Laplacian filter as mean amplitude curvature, yet another attribute that is useful for estimating seismic discontinuities.

Tilt Angle (TT)

Miller and Singh (1994) introduced the concept of the normalized vertical derivative, which they call the tilt angle, T, and which they express as

$TT = \arctan\left(\frac{\partial f/\partial t}{\sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}}\right) = \arctan\left(\frac{\partial f/\partial t}{TDX}\right)$ .  (A6.8)

Note that the tilt angle measures the dip between the vertical derivative and the magnitude of the
horizontal gradient, or TDX, as expressed in equation A6.4. It should be pointed out that since
we are using seismic time instead of depth, the term inside the bracket of the arctan in equation
A6.8 actually has units of velocity instead of being unitless as it would be if depth and distance
were used. However, by assuming that we have normalized the units in all three dimensions
(e.g. amplitude per sample) this scaling difference can be neglected.

Total horizontal derivative of the tilt angle (THDR)

Combining the concepts of the tilt angle, TT, and the magnitude of the horizontal derivative,
TDX, Verduzco et al. (2004) introduced the total horizontal derivative (THDR) of the tilt angle,
given by

$THDR = \sqrt{\left(\frac{\partial T}{\partial x}\right)^2 + \left(\frac{\partial T}{\partial y}\right)^2}$ .  (A6.9)

Although the THDR operation looks like a single derivative as expressed in equation A6.9, it is actually a second derivative of the original data when combined with equation A6.8. Thus, as pointed out by Cooper and Cowan (2008), this operator can enhance the noise in the data if you are not careful with the initial processing.

Normalized amplitude horizontal gradient

Wijns et al. (2005) introduced the theta map

$\cos\theta = \frac{\sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}}{\sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 + \left(\frac{\partial f}{\partial t}\right)^2}}$ ,  (A6.10)


where $\sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 + \left(\frac{\partial f}{\partial t}\right)^2}$ is called the analytic signal amplitude.

Normalized standard deviation ratio of the derivatives

Finally, Cooper and Cowan (2008) introduce the normalized standard deviation ratio of the
derivatives, given as

$NSTD = \frac{\sigma\left(\frac{\partial f}{\partial t}\right)}{\sigma\left(\frac{\partial f}{\partial x}\right) + \sigma\left(\frac{\partial f}{\partial y}\right) + \sigma\left(\frac{\partial f}{\partial t}\right)}$ .  (A6.11)

In their paper, the authors comment that the NSTD attribute makes large-amplitude and small-amplitude edges visible simultaneously, and therefore reveals more detailed information when the input data are relatively smooth.


Appendix 7 Theory of Curvature

Introduction

Curvature methods are based on a paper by Roberts (2001), in which he used differential geometry to show that the curvature of an event in the earth's subsurface could be estimated from a time structure map by fitting a local quadratic surface to this map. The following figure (Figure A7.1), taken from Roberts (2001), illustrates the concept in simple terms:

Figure A7.1. For a point P on a curve, the curvature can be defined by the radius of
curvature R of an “osculating” circle that is fit to the curve (from Roberts, 2001).

The curvature, K, is defined as the reciprocal of the radius of curvature, or K = 1/R.

The sign of the curvature can be understood by looking at a simple 2D structure (Figure A7.2):

Figure A7.2. Sign convention for curvature, where the arrows represent normal vectors to the surface (from Roberts, 2001).


When we extend our structure to three dimensions, as shown in Figure A7.3 below, we can then
define four basic types of curvature:

Kmin = the minimum curvature,

Kmax = the maximum curvature,

Kd = the curvature in the dip direction, and

Ks = the curvature in the strike direction.

Figure A7.3. Curvature in 3D, where Kmin is the minimum curvature, Kmax is the maximum
curvature, Kd is the dip curvature and Ks is the strike curvature (from Roberts, 2001).

In addition to these four obvious types of curvature, there are also two other important curvatures
that are defined from the minimum and maximum curvatures, Km , the mean curvature, which is
the average of the two, and Kg, the Gaussian curvature, which is the product of the two:

Km = (Kmax + Kmin)/2, and

Kg = Kmax Kmin.


The advantage of using mean and Gaussian curvature is that it gives us a way of classifying
structures, as shown in this figure (Figure A7.4) from Roberts:

Figure A7.4. Classification of structures based on Gaussian and mean curvature (from Roberts, 2001).

As will be discussed in more detail later in the advanced theory of curvature, the computation of
the various curvature attributes involves fitting a local quadratic surface given by:

$z(x, y) = ax^2 + by^2 + cxy + dx + ey + f$ ,  (A7.1)

which is the sum of a curved quadratic component given by the first three terms, $ax^2 + by^2 + cxy$, and a plane given by the last three terms, $dx + ey + f$. From these coefficients, the six different types of curvature described above can be computed; they are written in full in the advanced theory section below. In addition, Roberts (2001) introduces several other types of curvature, which are described later in this appendix.

Roberts (2001) developed a method for estimating the coefficients a through f from a mapped
surface. Figure A7.5a shows the seismic structure map used by Roberts (2001) and Figure A7.5b
shows one of the seismic lines used to pick the structure.


Figure A7.5 This figure shows (a) the time structure map used by Roberts (2001) to estimate curvature
attributes, and (b) one of the picked seismic lines used to produce the mapped surface shown in (a).

Roberts ran a 3x3 grid over this map and obtained the 9 structure values z1 through z9, as shown
in Figure A7.6:

Figure A7.6 The 3x3 grid used by Roberts (2001) to estimate the coefficients in equation A7.1.

The coefficients were then computed by Roberts (2001) as follows:

a = \frac{1}{2}\frac{\partial^2 z}{\partial x^2} = \frac{z_1 + z_3 + z_4 + z_6 + z_7 + z_9}{12\Delta x^2} - \frac{z_2 + z_5 + z_8}{6\Delta x^2} ,   (A7.2)

b = \frac{1}{2}\frac{\partial^2 z}{\partial y^2} = \frac{z_1 + z_2 + z_3 + z_7 + z_8 + z_9}{12\Delta y^2} - \frac{z_4 + z_5 + z_6}{6\Delta y^2} ,   (A7.3)


c = \frac{\partial^2 z}{\partial x \partial y} = \frac{z_3 + z_7 - z_1 - z_9}{4\Delta x \Delta y} ,   (A7.4)

d = \frac{\partial z}{\partial x} = \frac{z_3 + z_6 + z_9 - z_1 - z_4 - z_7}{6\Delta x} ,   (A7.5)

e = \frac{\partial z}{\partial y} = \frac{z_1 + z_2 + z_3 - z_7 - z_8 - z_9}{6\Delta y} ,   (A7.6)

f = \frac{2(z_2 + z_4 + z_6 + z_8) - (z_1 + z_3 + z_7 + z_9) + 5z_5}{9} .   (A7.7)
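As a check on equations A7.2 to A7.7, the short Python sketch below (the function name and the assumption of equal bin spacings Δx and Δy are ours) evaluates the six coefficients from the nine mapped values z1 to z9 of the 3x3 grid in Figure A7.6:

import numpy as np

def quadratic_coefficients(z, dx=1.0, dy=1.0):
    """z is a 3x3 array of time-structure values laid out as in Figure A7.6:
       z[0] = [z1, z2, z3], z[1] = [z4, z5, z6], z[2] = [z7, z8, z9]."""
    z1, z2, z3 = z[0]
    z4, z5, z6 = z[1]
    z7, z8, z9 = z[2]
    a = (z1 + z3 + z4 + z6 + z7 + z9) / (12 * dx**2) - (z2 + z5 + z8) / (6 * dx**2)
    b = (z1 + z2 + z3 + z7 + z8 + z9) / (12 * dy**2) - (z4 + z5 + z6) / (6 * dy**2)
    c = (z3 + z7 - z1 - z9) / (4 * dx * dy)
    d = (z3 + z6 + z9 - z1 - z4 - z7) / (6 * dx)
    e = (z1 + z2 + z3 - z7 - z8 - z9) / (6 * dy)
    f = (2 * (z2 + z4 + z6 + z8) - (z1 + z3 + z7 + z9) + 5 * z5) / 9
    return a, b, c, d, e, f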

Figure A7.7, from Roberts (2001), shows the maximum and minimum curvature attributes derived from
the map in Figure A7.5. In the maximum curvature plot, values of low curvature are greyed out.

Figure A7.7 Displays of maximum curvature (left) and minimum curvature (right). (Roberts, 2001)

The maximum, minimum, dip, strike, mean and Gaussian curvatures discussed above are a
function of the first five coefficients in equation A7.1 (a through e). Note the constant f never
enters into the formulations because derivatives are used to generate the curvature terms,
meaning that f is cancelled. There are two other attributes that are only a function of the in-line and
cross-line dip terms d and e. These are the total dip γ and azimuth α, given by:

\gamma = \tan^{-1}\!\left(\sqrt{d^2 + e^2}\right) ,  and   (A7.8)

\alpha = \tan^{-1}\!\left(\frac{e}{d}\right) .   (A7.9)
d 

Let us define a new constant g, which is the sum of the squares of d and e, or the square of the
tangent of γ, given by:

g = d^2 + e^2 = \tan^2\gamma .   (A7.10)

Using this new constant allows us to write the definitions of the attributes as:

K_m = \frac{a(1 + e^2) + b(1 + d^2) - cde}{(1 + g)^{3/2}} ,   (A7.11)

K_g = \frac{4ab - c^2}{(1 + g)^2} ,   (A7.12)

K_{max} = K_m + \sqrt{K_m^2 - K_g} ,   (A7.13)

K_{min} = K_m - \sqrt{K_m^2 - K_g} ,   (A7.14)

K_d = \frac{2(ad^2 + cde + be^2)}{g\sqrt{1 + g}} ,   (A7.15)

K_s = \frac{2(ae^2 - cde + bd^2)}{g\sqrt{1 + g}} .   (A7.16)
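The way equations A7.10 to A7.16 fit together can be summarized in a few lines of Python. This is a sketch of the formulas only, with illustrative names, and not the volumetric implementation used by the program:

import numpy as np

def curvatures(a, b, c, d, e):
    """Curvature attributes of equations A7.11-A7.16 from the coefficients of
    z(x, y) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    g = d**2 + e**2                                                      # equation A7.10
    km = (a * (1 + e**2) + b * (1 + d**2) - c * d * e) / (1 + g)**1.5    # mean curvature
    kg = (4 * a * b - c**2) / (1 + g)**2                                 # Gaussian curvature
    root = np.sqrt(max(km**2 - kg, 0.0))
    kmax, kmin = km + root, km - root                                    # principal curvatures
    # note: the dip and strike curvatures are undefined when g = 0 (zero dip)
    kd = 2 * (a * d**2 + c * d * e + b * e**2) / (g * np.sqrt(1 + g))    # dip curvature
    ks = 2 * (a * e**2 - c * d * e + b * d**2) / (g * np.sqrt(1 + g))    # strike curvature
    return km, kg, kmax, kmin, kd, ks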

If the inline and crossline dips d and e are both set to zero, note that Km simplifies to a + b and Kg
to 4ab − c^2. Kmax and Kmin then simplify to what are called most positive curvature K+ and most
negative curvature K−, given by:

K_+ = (a + b) + \sqrt{(a - b)^2 + c^2} ,  and   (A7.17)

K_- = (a + b) - \sqrt{(a - b)^2 + c^2} .   (A7.18)

Figure A7.8, from Roberts (2001), shows most negative and positive curvature:


Figure A7.8. Most negative curvature (left) and most positive curvature (right) (Roberts, 2001).

Finally, the strike curvature of equation A7.16 multiplied by g gives contour curvature, Kc:

K_c = \frac{2(ae^2 - cde + bd^2)}{\sqrt{1 + g}} .   (A7.19)

Also, note that if d and e equal zero, the dip and strike attributes (equations A7.15 and A7.16) are
also equal to zero. Finally, Koenderink and van Doorn (1992) proposed the curvedness attribute,
Kn, which is defined as the root-mean-square of Kmax and Kmin:

K_n = \sqrt{\frac{K_{max}^2 + K_{min}^2}{2}} .   (A7.20)

The one curvature attribute from Roberts (2001) that has not been included under Volume
Curvature is the shape index, Si, which is written as

S_i = \frac{2}{\pi}\tan^{-1}\!\left(\frac{K_{max} + K_{min}}{K_{max} - K_{min}}\right) .   (A7.21)


Advanced Theory of Curvature

As shown by Rich (2013), the curvature attributes of a surface given by S = z(x, y) and
parameterized as \vec{S}(u, v) = u\,\hat{i} + v\,\hat{j} + z(u, v)\,\hat{k} can best be understood using the differential of the
Gauss map, called the Weingarten matrix and written as

W = \begin{pmatrix} p & q \\ r & s \end{pmatrix}
  = -\begin{pmatrix} \vec{N}\cdot\vec{S}_{uu} & \vec{N}\cdot\vec{S}_{uv} \\ \vec{N}\cdot\vec{S}_{uv} & \vec{N}\cdot\vec{S}_{vv} \end{pmatrix}
     \begin{pmatrix} \vec{S}_u\cdot\vec{S}_u & \vec{S}_u\cdot\vec{S}_v \\ \vec{S}_u\cdot\vec{S}_v & \vec{S}_v\cdot\vec{S}_v \end{pmatrix}^{-1} ,   (A7.19)

where \vec{S}_u = \partial\vec{S}/\partial u, \vec{S}_v = \partial\vec{S}/\partial v, \vec{S}_{uu} = \partial^2\vec{S}/\partial u^2, \vec{S}_{vv} = \partial^2\vec{S}/\partial v^2, \vec{S}_{uv} = \partial^2\vec{S}/\partial u\,\partial v, and
\vec{N} = (\vec{S}_u \times \vec{S}_v)/|\vec{S}_u \times \vec{S}_v| is the unit surface normal. Let us apply equation A7.19 to the quadratic surface given by

z(x, y) = ax^2 + by^2 + cxy + dx + ey + f ,   (A7.20)

with a parameterization given by


\vec{S}(u, v) = u\,\hat{i} + v\,\hat{j} + (au^2 + bv^2 + cuv + du + ev + f)\,\hat{k} .   (A7.21)

First, we compute the first and second partial derivatives and the unit surface normal:


\vec{S}_u = \frac{\partial \vec{S}}{\partial u} = \hat{i} + (2au + cv + d)\hat{k} ,

\vec{S}_v = \frac{\partial \vec{S}}{\partial v} = \hat{j} + (2bv + cu + e)\hat{k} ,

\vec{S}_{uu} = \frac{\partial^2 \vec{S}}{\partial u^2} = 2a\hat{k} , \quad
\vec{S}_{vv} = \frac{\partial^2 \vec{S}}{\partial v^2} = 2b\hat{k} , \quad
\vec{S}_{uv} = \frac{\partial^2 \vec{S}}{\partial u\,\partial v} = c\hat{k} ,

\vec{S}_u \times \vec{S}_v =
\begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 1 & 0 & 2au + cv + d \\ 0 & 1 & 2bv + cu + e \end{vmatrix}
= -(2au + cv + d)\hat{i} - (2bv + cu + e)\hat{j} + \hat{k} ,  and

\vec{N} = \frac{\vec{S}_u \times \vec{S}_v}{\left|\vec{S}_u \times \vec{S}_v\right|}
= \frac{-(2au + cv + d)\hat{i} - (2bv + cu + e)\hat{j} + \hat{k}}{\sqrt{1 + (2au + cv + d)^2 + (2bv + cu + e)^2}} .

Next, we compute the matrix coefficients, which are

   
\vec{S}_u\cdot\vec{S}_u = 1 + (2au + cv + d)^2 , \quad
\vec{S}_u\cdot\vec{S}_v = (2au + cv + d)(2bv + cu + e) ,

\vec{S}_v\cdot\vec{S}_v = 1 + (2bv + cu + e)^2 , \quad
\vec{N}\cdot\vec{S}_{uu} = \frac{2a}{\sqrt{1 + (2au + cv + d)^2 + (2bv + cu + e)^2}} ,

\vec{N}\cdot\vec{S}_{uv} = \frac{c}{\sqrt{1 + (2au + cv + d)^2 + (2bv + cu + e)^2}} ,  and

\vec{N}\cdot\vec{S}_{vv} = \frac{2b}{\sqrt{1 + (2au + cv + d)^2 + (2bv + cu + e)^2}} .

To simplify from now on, we will compute the solution at u = v = 0. This gives the components
of the Weingarten matrix as

W = \frac{-1}{\sqrt{1 + d^2 + e^2}}
\begin{pmatrix} 2a & c \\ c & 2b \end{pmatrix}
\begin{pmatrix} 1 + d^2 & de \\ de & 1 + e^2 \end{pmatrix}^{-1} .


Performing the matrix inversion gives

W = \frac{-1}{\sqrt{1 + d^2 + e^2}}
\begin{pmatrix} 2a & c \\ c & 2b \end{pmatrix}
\frac{1}{1 + d^2 + e^2}
\begin{pmatrix} 1 + e^2 & -de \\ -de & 1 + d^2 \end{pmatrix} .

Finally, computing the matrix product gives

W = \begin{pmatrix} p & q \\ r & s \end{pmatrix} ,   (A7.22)

where

p = \frac{cde - 2a(1 + e^2)}{(1 + d^2 + e^2)^{3/2}} , \quad
q = \frac{2ade - c(1 + d^2)}{(1 + d^2 + e^2)^{3/2}} , \quad
r = \frac{2bde - c(1 + e^2)}{(1 + d^2 + e^2)^{3/2}} , \quad
s = \frac{cde - 2b(1 + d^2)}{(1 + d^2 + e^2)^{3/2}} .

Many curvature attributes can be calculated from an eigen-decomposition of the Weingarten
matrix. The eigenvalues and eigenvectors of W can be computed by solving

\begin{pmatrix} p - \lambda_i & q \\ r & s - \lambda_i \end{pmatrix}
\begin{pmatrix} v_{1i} \\ v_{2i} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} , \quad i = 1, 2 ,   (A7.23)

where \lambda_i = \lambda_1, \lambda_2 are the two eigenvalues and \vec{v}_i = (v_{11}, v_{21})^T, (v_{12}, v_{22})^T are the two eigenvectors associated
with W. The eigenvalues are found by setting the determinant of the matrix in equation A7.23
equal to zero and solving the resulting quadratic equation, or

\begin{vmatrix} p - \lambda_i & q \\ r & s - \lambda_i \end{vmatrix} = 0
\;\Rightarrow\; \lambda_i^2 - (p + s)\lambda_i + (ps - qr) = 0
\;\Rightarrow\; \lambda_i = \frac{p + s}{2} \pm \sqrt{\frac{(p - s)^2}{4} + qr} .

The eigenvectors are then given by substituting the eigenvalues in equation A7.23, to give

(p - \lambda_i)v_{1i} + q\,v_{2i} = 0 \;\Rightarrow\; v_{1i} = \frac{q}{\lambda_i - p}\,v_{2i} , \quad \text{or} \quad
r\,v_{1i} + (s - \lambda_i)v_{2i} = 0 \;\Rightarrow\; v_{1i} = \frac{\lambda_i - s}{r}\,v_{2i} .

Substituting the explicit values of p, q, r and s that were computed from the quadratic surface
and given in equation A7.22, we get eigenvalues of

\lambda_i = \frac{cde - a(1 + e^2) - b(1 + d^2) \pm \sqrt{\alpha + \beta}}{(1 + d^2 + e^2)^{3/2}} ,   (A7.24)

where \alpha = \left(a(1 + e^2) - b(1 + d^2)\right)^2 and \beta = \left(2ade - c(1 + d^2)\right)\left(2bde - c(1 + e^2)\right), and eigenvectors

v_{1i} = \frac{2ade - c(1 + d^2)}{\lambda_i (1 + d^2 + e^2)^{3/2} - cde + 2a(1 + e^2)}\,v_{2i} , \quad \text{or}   (A7.25)

v_{1i} = \frac{\lambda_i (1 + d^2 + e^2)^{3/2} - cde + 2b(1 + d^2)}{2bde - c(1 + e^2)}\,v_{2i} .   (A7.26)

The eigenvalues found in equation A7.24 are the negative of the principal curvatures, Kmax and
Kmin, which can therefore be written as

K_{max} = \frac{a(1 + e^2) + b(1 + d^2) - cde + \sqrt{\alpha + \beta}}{(1 + d^2 + e^2)^{3/2}} ,  and   (A7.27)

K_{min} = \frac{a(1 + e^2) + b(1 + d^2) - cde - \sqrt{\alpha + \beta}}{(1 + d^2 + e^2)^{3/2}} ,   (A7.28)

where \alpha = \left(a(1 + e^2) - b(1 + d^2)\right)^2 and \beta = \left(2ade - c(1 + d^2)\right)\left(2bde - c(1 + e^2)\right). These attributes
are described by Roberts (2001), although they are not written down in that paper in the explicit form
given in equations A7.27 and A7.28. Roberts (2001) does, however, give the explicit form of the
mean curvature Km and Gaussian curvature Kg, which are, respectively, the average and product of
the principal curvatures, given as

K_m = \frac{a(1 + e^2) + b(1 + d^2) - cde}{(1 + d^2 + e^2)^{3/2}} ,  and   (A7.29)

K_g = \frac{4ab - c^2}{(1 + d^2 + e^2)^2} .   (A7.30)
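As a numerical check on the derivation, the Python sketch below (the coefficient values and function name are purely illustrative) builds the Weingarten matrix of equation A7.22 at u = v = 0 and confirms that the negatives of its eigenvalues reproduce Kmax and Kmin of equations A7.27 and A7.28:

import numpy as np

def weingarten(a, b, c, d, e):
    # Weingarten matrix of the quadratic surface at u = v = 0 (equation A7.22)
    second = np.array([[2.0 * a, c], [c, 2.0 * b]])
    first = np.array([[1.0 + d * d, d * e], [d * e, 1.0 + e * e]])
    return (-1.0 / np.sqrt(1.0 + d * d + e * e)) * second @ np.linalg.inv(first)

# illustrative coefficient values (not from any real dataset)
a, b, c, d, e = 0.02, -0.01, 0.005, 0.1, -0.2
lam = np.sort(np.real(np.linalg.eigvals(weingarten(a, b, c, d, e))))

# closed forms of equations A7.27 and A7.28
alpha = (a * (1 + e**2) - b * (1 + d**2))**2
beta = (2 * a * d * e - c * (1 + d**2)) * (2 * b * d * e - c * (1 + e**2))
denom = (1 + d**2 + e**2)**1.5
kmax = (a * (1 + e**2) + b * (1 + d**2) - c * d * e + np.sqrt(alpha + beta)) / denom
kmin = (a * (1 + e**2) + b * (1 + d**2) - c * d * e - np.sqrt(alpha + beta)) / denom

print(-lam)            # negatives of the eigenvalues ...
print(kmax, kmin)      # ... should match Kmax and Kmin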

The eigenvectors allow us to compute the angle of the direction of the principal curvature, θ.
The tangent of this angle is given as the ratio of the eigenvector components, which can be given
by either equation A7.25 or A7.26. Choosing equation A7.26, we get

\tan\theta = \frac{v_{1i}}{v_{2i}} = \frac{\lambda_i (1 + d^2 + e^2)^{3/2} - cde + 2b(1 + d^2)}{2bde - c(1 + e^2)} .   (A7.31)

Note that the factor (1 + d2 + e2) is missing in equation 13 from Rich (2013), and also that the
denominator in equation 13 from Rich (2013) is the numerator from equation A7.25 given
earlier. However, numerical studies using MathCad confirm that equation A7.31 above is
correctly derived. As pointed out by Rich (2013), equation A7.31 is different than the expression
used in the geophysical literature, which normally assumes that d = e = 0. In this case, we get
the simpler form of the eigenvalues given by

\lambda_i = -(a + b) \pm \sqrt{(a - b)^2 + c^2} ,   (A7.32)

which leads to the simpler form of the angle given by

\tan\theta_i = \frac{-(a - b) \pm \sqrt{(a - b)^2 + c^2}}{c}
= \frac{-1 \pm \sqrt{1 + \left(\dfrac{c}{a - b}\right)^2}}{\dfrac{c}{a - b}}
\;\Rightarrow\; \tan 2\theta = \frac{c}{a - b} .   (A7.33)

Finally, Roberts (2001) also defines two other types of curvature attributes: most positive
curvature K+ and most negative curvature K−. These attributes turn out to be identical to the principal
curvatures given in equations A7.27 and A7.28 when d = e = 0, where, as might be expected, most positive
curvature is the one with the + term and most negative curvature the one with the − term.


Instantaneous Curvature

At first glance, there appears to be no relationship between instantaneous attributes, coherency
and curvature. However, recall that in the coherency method (Appendix 5) it is important to find
the x and y dips, called p and q. Also note that the coefficients d and e in equation A7.1 are
identical to the inline and crossline dips p and q. Furthermore, if we differentiate equation A7.1
we find that at x = y = 0 the first three quadratic coefficients can be written as:

a = \frac{1}{2}\frac{\partial p}{\partial x} , \quad
b = \frac{1}{2}\frac{\partial q}{\partial y} , \quad \text{and} \quad
c = \frac{1}{2}\left(\frac{\partial p}{\partial y} + \frac{\partial q}{\partial x}\right) .

Thus, we can re-write equation A7.1 as a function of the dips and their x and y derivatives, as
follows:

z(x, y) = \frac{1}{2}\frac{\partial p}{\partial x}x^2 + \frac{1}{2}\frac{\partial q}{\partial y}y^2
+ \frac{1}{2}\left(\frac{\partial p}{\partial y} + \frac{\partial q}{\partial x}\right)xy + px + qy + f ,   (A7.34)

where the f term is simply a vertical shift that can be ignored. The connection between
instantaneous and correlation attributes can therefore be found if we can find a connection
between dip and instantaneous attributes.
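Assuming inline and crossline dip volumes p and q are available (the array names and the use of simple finite differences are our assumptions, not the program's implementation), the coefficients of equation A7.34 follow from one further spatial differentiation, as in this sketch:

import numpy as np

def coefficients_from_dips(p, q, dx=1.0, dy=1.0):
    """p, q : 3D dip volumes ordered (x, y, t). Returns the a, b, c, d, e
    coefficient volumes of equation A7.34 via one further differentiation."""
    dp_dx, dp_dy = np.gradient(p, dx, dy, axis=(0, 1))
    dq_dx, dq_dy = np.gradient(q, dx, dy, axis=(0, 1))
    a = 0.5 * dp_dx
    b = 0.5 * dq_dy
    c = 0.5 * (dp_dy + dq_dx)
    return a, b, c, p, q   # d and e are simply the dips themselves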

This relationship was first described in a paper by Barnes (1996) that pre-dated most of the new
attributes. Barnes (1996) introduced two new spatial instantaneous attributes and showed how
these attributes could be used to find both the dips and the azimuth. Although Barnes (1996) was
motivated by the work of Scheuer and Oldenburg (1988) on phase versus group velocity, his
method takes us back full circle to Gabor (1947). Figure A7.9 shows a 3D volume over the
Boonsville field in Texas, with the inline, crossline and time directions indicated. Note that when
we discussed instantaneous frequency (Appendix 1), we only computed the frequency attribute
in the time direction. However, since the instantaneous phase is actually a function of three
coordinates (i.e., Φ(t, x, y)), Barnes noted that we can also compute spatial frequencies in the inline
(x) and crossline (y) directions.


Figure A7.9. The 3D Boonsville seismic volume.

Analogous to instantaneous frequency, Barnes (1996) therefore defined the instantaneous
wavenumbers kx and ky (using the Marfurt (2006) notation) as:

k_x = \frac{\partial \Phi(t, x, y)}{\partial x} = \frac{s\dfrac{\partial h}{\partial x} - h\dfrac{\partial s}{\partial x}}{A^2} ,   (A7.35)

and

k_y = \frac{\partial \Phi(t, x, y)}{\partial y} = \frac{s\dfrac{\partial h}{\partial y} - h\dfrac{\partial s}{\partial y}}{A^2} .   (A7.36)
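A minimal Python sketch of equations A7.35 and A7.36 is given below; it assumes a 3D array seis ordered (x, y, t), takes the Hilbert transform h along the time axis, and uses simple finite differences for the spatial derivatives (one of the two options discussed next):

import numpy as np
from scipy.signal import hilbert

def instantaneous_wavenumbers(seis):
    """kx, ky of equations A7.35 and A7.36 for a volume ordered (x, y, t)."""
    h = np.imag(hilbert(seis, axis=-1))          # Hilbert transform along time
    a2 = seis**2 + h**2 + 1e-12                  # instantaneous power A^2
    ds_dx, ds_dy = np.gradient(seis, axis=(0, 1))
    dh_dx, dh_dy = np.gradient(h, axis=(0, 1))
    kx = (seis * dh_dx - h * ds_dx) / a2
    ky = (seis * dh_dy - h * ds_dy) / a2
    return kx, ky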

As with instantaneous frequency, the derivative operation can be done either in the frequency
domain or using finite differencing up to a given order. The instantaneous time dips in the x and
y direction, p and q, are then given as:

p = \frac{k_x}{\omega} = \frac{T}{\lambda_x} ,  and   (A7.37)

q = \frac{k_y}{\omega} = \frac{T}{\lambda_y} ,   (A7.38)

where T = period and λ = wavelength. Using the dips p and q, the azimuth φ and true time dip θ
are then given by:

\varphi = \tan^{-1}(p/q) = \tan^{-1}(k_x/k_y) ,  and   (A7.39)

\theta = \sqrt{p^2 + q^2} .   (A7.40)
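Continuing the sketch, the dips and azimuth of equations A7.37 to A7.40 follow directly once the instantaneous frequency ω is available (here an assumed array omega; arctan2 is used so that the azimuth keeps its quadrant):

import numpy as np

def dip_and_azimuth(kx, ky, omega, eps=1e-12):
    """Equations A7.37-A7.40: inline/crossline dips p, q, azimuth and true dip."""
    p = kx / (omega + eps)
    q = ky / (omega + eps)
    azimuth = np.arctan2(p, q)          # equation A7.39, quadrant-preserving form
    true_dip = np.sqrt(p**2 + q**2)     # equation A7.40
    return p, q, azimuth, true_dip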

To understand this concept, we created a seismic volume that consisted of a dipping cosine
wave, which is shown in Figure A7.10. The period, wavelength and time dip are shown on the
plot, where the period is found by measuring the time between successive peaks, and the
wavelength from the picked horizon.

Figure A7.10. A dipping cosine wave with the period and wavelength illustrated. Note that the dip is given by the
period/wavelength.

Next, let's look at the dipping cosine wave in 3D, as shown in Figure A7.11. In this display, we
have sliced it along the x, y and time axes. For this dipping event, it is clear how the dips and
azimuths are related to the instantaneous frequency and wavenumbers. Note that in this
example, the p dip is steeper than the q dip; the true dip, which is found along a line
orthogonal to strike, can be computed from equation A7.40.


Figure A7.11. A dipping cosine wave in 3D, where the inline and crossline dips p and q are illustrated,
as well as the true dip θ and azimuth φ.

Next, we will illustrate these instantaneous spatial attributes using the Boonsville dataset. There
are many possible displays based on the “building blocks” for these attributes, which include the
derivatives of the seismic amplitude, Hilbert transform and phase in the t, x, and y directions, the
three frequencies, and p, q, true dip and azimuth. Note that the many divisions involved in the
computations make this method very sensitive to noise when compared to correlation based
methods. In Figure A7.12, the true dip and azimuth are compared with the time slice from the
Boonsville dataset.

(a) (b) (c)

Figure A7.12. A display of instantaneous dip and azimuth attributes, where the 1000 msec time slice is shown in (a),
instantaneous dip is shown in (b) and instantaneous azimuth is shown in (c).


Note that as part of the correlation curvature method, azimuth can also be estimated if we replace
the p dip attribute by coefficient d in equation A7.1 and the q attribute by coefficient e in equation
A7.1, and then use equation A7.39 to compute azimuth. A comparison between the two azimuth
methods is shown in Figure A7.13, where we see good similarity between the two plots, but
higher frequency detail in the instantaneous azimuth calculation.

(a) (b)
Figure A7.13. A comparison of instantaneous azimuth and correlation azimuth attributes, where instantaneous azimuth is shown
in (a), and correlation azimuth is shown in (b). The time slice displayed in the plot is at 1000 msec.

Also, recall from equation A7.34 that all of the curvature attributes can be derived from the
instantaneous dip attributes described earlier, using a second differentiation, as was observed by
al-Dossary and Marfurt (2006). Figure A7.14 shows a comparison between maximum curvature
derived in the two different ways, where Figure A7.14(b) shows instantaneous maximum curvature
and Figure A7.14(c) shows correlation maximum curvature. The two different curvature
methods give similar results. However, although instantaneous curvature shows a higher
frequency lineament than correlation curvature, as shown by the ellipses, it misses the feature
shown by the circles, which the correlation curvature captured. Thus, each method has its own
strengths.


(a) (b) (c)

Figure A7.14. A display of maximum curvature attributes computed using instantaneous and correlation methods, where the
1000 msec time slice is shown in (a), instantaneous maximum curvature is shown in (b) and correlation maximum curvature is
shown in (c). The ellipses show a lineament feature and the circles show circular feature.

Direct calculation can cause the instantaneous frequency and instantaneous wavenumbers to fluctuate
rapidly with spatial and temporal location. Barnes (2000) proposed a weighting strategy for
improving the performance of the instantaneous attributes.

Let’s start from the first moment formula of a seismic trace


\frac{\displaystyle\int_{-\infty}^{\infty} f(t)\,R(t)^2\,dt}{\displaystyle\int_{-\infty}^{\infty} R(t)^2\,dt}
= \frac{\displaystyle\int_{0}^{\infty} f(\omega)\,A(\omega)^2\,df}{\displaystyle\int_{0}^{\infty} A(\omega)^2\,df}   (A7.41)

where f(t) is the instantaneous frequency, R(t) is the instantaneous amplitude, and f(ω) and A(ω) are
the frequency and amplitude in the Fourier domain. Equation A7.41 states that the amplitude-squared-weighted
average of the instantaneous frequency in the time domain equals the amplitude-squared-weighted
average frequency of the Fourier spectrum.

A weighted instantaneous frequency is computed in a window of the trace. The window must be
purely real and should have little influence on the local Fourier transform. Its length determines
the temporal resolution (Barnes, 2000). The first-moment formula for a windowed analytic trace
is

\frac{\displaystyle\int_{-\infty}^{\infty} f(t)\,R(t)^2\,w(t)^2\,dt}{\displaystyle\int_{-\infty}^{\infty} R(t)^2\,w(t)^2\,dt}
= \frac{\displaystyle\int_{0}^{\infty} f(\omega)\,A(\omega)^2\,df}{\displaystyle\int_{0}^{\infty} A(\omega)^2\,df}   (A7.42)

Equation A7.42 indicates that the instantaneous frequency, weighted by the instantaneous
power scaled by the window squared, equals the average Fourier spectral frequency calculated in
that window. Computing the weighted average instantaneous frequency in a running window along a
trace produces a local frequency trace, which equals the center frequency of a spectrogram.
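A minimal sketch of this running-window weighting, assuming 1D instantaneous frequency and amplitude traces and an illustrative window length, is:

import numpy as np

def weighted_instantaneous_frequency(f_inst, r_inst, win=21):
    """Amplitude-weighted instantaneous frequency (after Barnes, 2000).

    f_inst : 1D instantaneous frequency trace.
    r_inst : 1D instantaneous amplitude trace.
    win    : odd window length in samples (illustrative default)."""
    w = np.hanning(win)                            # real, smooth analysis window
    power = r_inst**2
    num = np.convolve(f_inst * power, w**2, mode='same')
    den = np.convolve(power, w**2, mode='same') + 1e-12
    return num / den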


An example of weighted instantaneous frequency and instantaneous frequency is shown in Figure
A7.15. Weighted instantaneous frequency (red) captures the main variation of instantaneous
frequency (blue), while reducing the spikes. This weighting strategy uses the instantaneous
amplitude to weight the instantaneous frequency, and is therefore a data-adaptive process. The
same weighting can also be applied to the instantaneous wavenumbers kx and ky.

Figure A7.15. Comparison of instantaneous frequency and weighted instantaneous frequency. Weighted instantaneous frequency
(red) captures the main variation of instantaneous frequency (blue), while reducing the spikes.


Appendix 8: Theory of the Phase Congruency Method


The phase congruency (PC) algorithm was developed to detect corners and edges on two-
dimensional (2D) digital images (Kovesi, 1996). To understand the concept behind phase
congruency in 2D space, it is instructive to first understand the algorithm in one dimension (1D).
Figure A8.1(a), from Kovesi (2003), shows four terms in the Fourier series for a step function.

(a) (b)
Figure A8.1. The concept of phase congruency in 1D, where (a) shows that the individual terms in a
Fourier series will be in-phase at a step, and (b) shows a polar plot of the real and imaginary Fourier terms
used to compute phase congruency (from Kovesi, 2003).

Note that these terms are all in-phase at the step, and therefore an analysis of their phase
components indicates the presence of the step. To quantify this concept, Kovesi (2003) plots the
real and imaginary values for all four terms in the polar plot shown in Figure A8.1(b). We then
line up the vectors as shown in the figure, with their related amplitude and phase values for all
four terms. Extending this to n terms, and using the notation used in Figure A8.1(b), Kovesi
(2003) shows that phase congruency in 1D space is given by

PC(x) = \frac{E(x)}{\sum_n A_n(x)} = \frac{\sum_n A_n(x)\cos\!\left(\phi_n(x) - \bar{\phi}(x)\right)}{\sum_n A_n(x)} ,   (A8.1)

where An(x) and φn(x) are the length and phase angle of each of the individual n amplitude
vectors, and E(x) and \bar{\phi}(x) are the length and phase angle of the summed vectors. Kovesi
(2003) then shows how to extend equation A8.1 to the two-dimensional image domain. This is
done using oriented 2D Gabor wavelets in the Fourier domain. In the initial implementation,
Kovesi used 2D Gaussian wavelets, but in a later implementation he used log Gabor wavelets, as
introduced by Field (1987). The advantage of the log Gabor transform when used for the radial
filtering is that it is Gaussian on a logarithmic scale and thus has better high-frequency
characteristics than the traditional Gabor transform (Cook et al., 2006).
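Before moving to two dimensions, equation A8.1 itself can be sketched in a few lines of Python; here the amplitudes An(x) and phases φn(x) are assumed to come from some bank of bandpass (e.g. log Gabor) filters, and the weighting and noise-threshold terms of the full algorithm are omitted:

import numpy as np

def phase_congruency_1d(amps, phases):
    """Equation A8.1. amps, phases : arrays of shape (n_filters, n_samples)
    holding An(x) and phi_n(x) for each filter scale n."""
    # energy vector: sum of the complex filter responses
    real = np.sum(amps * np.cos(phases), axis=0)
    imag = np.sum(amps * np.sin(phases), axis=0)
    mean_phase = np.arctan2(imag, real)                        # phi_bar(x)
    e = np.sum(amps * np.cos(phases - mean_phase), axis=0)     # E(x)
    return e / (np.sum(amps, axis=0) + 1e-12)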


A flow chart for this method is shown in Figure A8.2. This basic algorithm is extended by using
weighting and noise thresholding terms (for details, see Kovesi, 2003).

Figure A8.2. A flowchart illustrating the basic steps in the phase congruency algorithm.

As can be seen in Figure A8.2, the first step in the 2D phase congruency algorithm is to
transform the data to the 2D Fourier domain. We then compute and apply N·M filters (N radial
log Gabor filters multiplied by M angular filters). The log Gabor filters are computed over N
scales S, where S = 0, …, N−1. Typically, the value of N is between 4 and 8. Each log Gabor
filter is computed by the formula

\mathrm{logGabor}_S = \exp\!\left(\frac{-\ln(r\cdot\lambda_S)^2}{\sigma}\right)\cdot lp ,   (A8.2)

where r is the radius from zero frequency, λS = 3m^S is the scale value, with a default value of m =
2.1, σ = 2 ln(0.55)^2 is the width of the log Gaussian function, and lp is a low-pass 2D Butterworth
filter. The angular filters are created over M orientation angles θ, where θ = 0, π/M, …, (M−1)π/M.
The default value of M is 6, in which case the angles will range from 0° to 150° in
increments of 30°.
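A sketch of one radial log Gabor filter of equation A8.2 on an n x n frequency grid is shown below; the Butterworth low-pass term lp is built with assumed cutoff and order values, and the function name is ours:

import numpy as np

def log_gabor_filter(n, scale, m=2.1, sigma=2 * np.log(0.55)**2,
                     cutoff=0.45, order=15):
    """Radial log Gabor filter of equation A8.2 on an n x n frequency grid.
    scale is S in the text; lp is a frequency-domain Butterworth low-pass
    filter with assumed cutoff and order."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing='ij')
    r = np.sqrt(fx**2 + fy**2)
    r[0, 0] = 1.0                        # avoid log(0) at zero frequency
    lam = 3.0 * m**scale                 # wavelength lambda_S = 3 m^S
    lp = 1.0 / (1.0 + (r / cutoff)**(2 * order))
    g = np.exp(-np.log(r * lam)**2 / sigma) * lp
    g[0, 0] = 0.0                        # zero response at DC
    return g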

After the N·M filters are applied, each filtered image is transformed back to the spatial domain
and, after appropriate weighting and noise thresholding, is summed over the scales to produce an
image at each orientation. These images are then analyzed using moment analysis, which is
equivalent to performing singular value decomposition on the phase congruency covariance
matrix (Kovesi, 2003).

The maximum moment M and minimum moment m are computed as follows:

M = \frac{1}{2}\left(c + a + \sqrt{b^2 + (a - c)^2}\right) ,   (A8.3)

m = \frac{1}{2}\left(c + a - \sqrt{b^2 + (a - c)^2}\right) ,   (A8.4)

where a = \sum_{\theta=0}^{\theta_M}\left(PC(\theta)\cos\theta\right)^2 , \;
b = 2\sum_{\theta=0}^{\theta_M}\left[\left(PC(\theta)\cos\theta\right)\left(PC(\theta)\sin\theta\right)\right] , \; and \;
c = \sum_{\theta=0}^{\theta_M}\left(PC(\theta)\sin\theta\right)^2 .
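The moment analysis of equations A8.3 and A8.4 can be sketched as follows, where pc_images is assumed to be a list of phase congruency images, one per orientation angle (names and structure are illustrative):

import numpy as np

def pc_moments(pc_images, angles):
    """Maximum and minimum moments (equations A8.3 and A8.4).
    pc_images : list of 2D phase congruency arrays, one per orientation;
    angles    : the corresponding orientation angles in radians."""
    a = b = c = 0.0
    for pc, theta in zip(pc_images, angles):
        a = a + (pc * np.cos(theta))**2
        b = b + 2.0 * (pc * np.cos(theta)) * (pc * np.sin(theta))
        c = c + (pc * np.sin(theta))**2
    root = np.sqrt(b**2 + (a - c)**2)
    m_max = 0.5 * (c + a + root)        # edge strength
    m_min = 0.5 * (c + a - root)        # corner strength
    return m_max, m_min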

The magnitude of the maximum moment indicates the significance of an edge feature on the
image. The magnitude of the minimum moment gives an indication of a corner on the image. In
this study, we will display only the maximum moment M, since we are interested in edge
features.

A Simple Example

Let us now apply phase congruency to a simple example. Since the original algorithm was
developed to identify features on a 2D photograph, we used an image consisting of a cylinder of
amplitude 2 superimposed on a cube of amplitude 1 and a background of amplitude 0, shown in
perspective view in Figure A8.3(a) and map view in Figure A8.3(b). The amplitude-coded map
in Figure A8.3(b) is presented to the phase congruency algorithm.


(a) (b)

Figure A8.3. A simple test designed for the phase congruency algorithm, consisting of a cylinder of amplitude 2 intersecting a
cube of amplitude 1, on a background of amplitude 0. The perspective view is in (a) and the map view is in (b).

Figure A8.4(a) next shows the perspective view of the real component of the 2D Fourier
transform of the image, and Figure A8.4(b) shows the same display in map view. After the 2D
Fourier transform, the first step is the creation of the filters. We used four scales (with m = 2.1)
and six orientations.

(a) (b)

Figure A8.4. The real component of the 2D Fourier transform of the image shown in Figure A8.3, where (a) shows the
perspective view of the transform and (b) shows the map view.


Figure A8.5 shows the perspective plots of the four log Gabor filters, from S = 0 to S = 3. As
expected, the filters get more “spiky” as the scale increases.

Figure A8.5. Perspective views of the four log Gabor filters used in the phase congruency analysis of the image shown in Figure
A8.3, for scales from 0 to 3, where m = 2.1 in equation A8.2.

The map views of the four filters are shown in Figure A8.6. Again, notice that the filters get
more “spiky” as the scale increases.


Figure A8.6. Map views of the four log Gabor filters used in the phase congruency analysis of the image shown in Figure A8.3,
for scales from 0 to 3, where m = 2.1 in equation A8.2.

Next, the map views of the six angular filters are shown in Figure A8.7. (The perspective views
are not shown.) Finally, Figure A8.8 shows the 24 individual filters in groups of six
corresponding to the angle range for each scale. Again, these are only shown in map view.

The filters shown in Figure A8.8 were then applied to the 2D Fourier transform of the image
shown in Figure A8.4, and their inverse transforms were computed. We then stack these inverse
transforms over the scales for each angle and apply moment analysis. The final result is shown
in Figure A8.9(b), with the initial image shown in Figure A8.9(a) for reference. Notice how well
the edges of the two structures have been defined. Of course, the example just shown is
extremely simple and was used only to illustrate the algorithm. In the following sections, we
will therefore apply the phase congruency algorithm to seismic data slices.


Figure A8.7. Map views of the six angular filters used in the phase congruency analysis of the image shown in Figure A8.3(b),
for angles from 0° to 150°.

(a) (b)


(c) (d)
Figure A8.8. Map views of all 24 combinations of radial and angular filters applied in this example, where (a) shows the filters at
all angles for the first scale (Scale0), (b) shows the filters at all angles for the second scale (Scale1), (c) shows the filters at all
angles for the third scale (Scale2), and (d) shows the filters at all angles for the last scale (Scale3).

Figure A8.9. Map views of (a) the original image consisting of a cylinder and a cube from Figure A8.3(b), and (b) the final
analysis using phase congruency. Notice the clear definition of the edges of the two objects.

Implementation on 3D seismic volumes

A schematic diagram showing the way in which the phase congruency method was implemented
on seismic data is shown in Figure A8.10. Although this algorithm proceeds by analyzing
constant time slices, it should also be possible to apply the algorithm to structural, or
stratigraphic, slices. We will next implement the phase congruency algorithm on several seismic
examples. The first example will be a karst collapse study from the Boonsville area of north
Texas, and the second example will be from a fractured carbonate reservoir in Alberta.


Figure A8.10. A schematic showing the implementation of the phase congruency algorithm to seismic data.

Karst collapse case study

We will first analyze a 3D dataset from Boonsville, Texas. The wells and 3D seismic from this
dataset are in the public domain, and are available from the Bureau of Economic Geology at the
University of Texas. The geology of the area and exploration objectives of the Boonsville
dataset have been fully described by Hardage et al. (1996) and a map of the area taken from that
paper is shown in Figure A8.11.

Figure A8.11. A map showing the location of the Boonsville gas field. (Hardage et al., 1996).

In the Boonsville gas field, production is from the Bend conglomerate, a middle Pennsylvanian
clastic deposited in a fluvio-deltaic environment (Hardage et al.,1996). Figure A8.12 shows
inline 141 from the 3D seismic volume over the gas field. An event close to the top of the Bend
formation (the Davis) is shown at approximately 930 ms, and the base of the Bend formation is
indicated by the third picked event at 1050 ms, the Vineyard. The Bend formation is underlain
by Paleozoic carbonates, the deepest being the Ellenburger Group of Ordovician age. The
Ellenburger contains numerous karst collapse features which extend up to 760 m from basement
through the Bend conglomerate (Hardage et al., 1996).

As can be seen in Figure A8.12, these karst collapse features, illustrated by the vertical ellipses,
have a significant effect on the basal Vineyard event and continue vertically almost until the
Davis event. Hardage et al. (1996) demonstrate, using measured pressure data, that these karst
collapse features affect reservoir compartmentalization within the producing Bend formation.
The karst collapse features are also evident when we look at the time structure map from the
Davis horizon (the top picked event on the seismic section in Figure A8.12), as shown in Figure
A8.13. The circular anomalies on this structure map clearly show these karst collapse features.
Also shown is the position of inline 141 from Figure A8.12, which cuts two of the anomalies.

Figure A8.12. Inline 141 from the Boonsville survey, where the red ellipses indicate the karst structures. The picked events are,
from shallow to deep, Davis (green), Runaway (blue) and Vineyard (red).


Figure A8.13. A time structure map of the Davis formation over the Boonsville survey. The circular anomalies represent karst
structures and the horizontal red line is the position of inline 141, shown in Figure A8.12.

We will now try to identify these karst collapse features using both the phase congruency and
coherency methods. Figure A8.14 shows a set of composite slices (in the X, Y and Z directions)
over the 3D seismic survey illustrated by the white outline in Figure A8.13, where
Figure A8.14(a) shows the original seismic survey and Figure A8.14(b) shows the phase congruency
results. On Figure A8.14(a), the Y-direction, or in-line, slice shows the karst features quite
clearly (they are annotated with the red ellipses), but on the horizontal time slice they are not as
clear. On Figure A8.14(b), the in-line slice shows the karst features more clearly than on the
seismic display (again, they are annotated with the red ellipses), and on the horizontal time slice
they are also much clearer.

(a) (b)

Figure A8.14. A vertical slice showing karst features superimposed on a horizontal slice at 1080 ms, roughly halfway through the
karst collapse, where (a) shows the seismic volume and (b) shows the phase congruency volume. The red ellipses on both figures
illustrate the vertical collapse features.

Figure A8.15 shows the same set of composite slices (in the X, Y and Z directions) as in Figure
A8.14, where Figure A8.15(a) again shows the original seismic survey and Figure A8.15(b) now
shows the coherency results. On Figure A8.15(b), the in-line slice shows the karst features more
clearly than on the seismic display (again, they are annotated with the red ellipses), but in a
different way than the phase congruency result of Figure A8.14(b).

(a) (b)
Figure A8.15. A vertical slice showing karst features superimposed on a horizontal slice at 1080 ms, roughly halfway through the
karst collapse, where (a) shows the seismic volume and (b) shows the coherency volume. As in Figure A8.14, the red ellipses on both
figures illustrate the vertical collapse features.

Fractured carbonate case study

Our second case study comes from a fractured carbonate reservoir in Alberta. The exact location
of this reservoir cannot be revealed for reasons of confidentiality. Figure A8.16 shows a time
slice of phase congruency through the main producing interval in the reservoir. There are two
things to note on this time slice. First, the producing wells are shown as green circles on the
slice. Notice the higher density of wells in the lower portion (southern part) of the map. In fact,
there are only two producing wells in the top portion (northern part). Second, notice the high
density of fractures in the southern part of the map that correspond very well to the high
production. The fractures in the southern part of the map are aligned along a dominant east-west
trend. Conversely, notice the lower density of fractures in the northern part of the map that
correspond to lower production. Also, the fractures in the northern part of the map appear to be
in conjugate sets, running both north-south and east-west. It is obvious from this map that the
phase congruency algorithm has been able to identify fracture patterns that correspond to
carbonate production.


Figure A8.16. A time slice of phase congruency within the reservoir interval of a carbonate reservoir in Alberta, where the green
circles represent producing wells.

Next, Figure A8.17 shows a vertical seismic section on the left, along with the results of analysis with
three different algorithms: an un-named contractor section which attempts to display fracture
density, phase congruency, and curvature. (Curvature is discussed in Appendix 7; for more details,
see Roberts (2001).) On the three computed
fracture plots an FMI, or Formation MicroImager, log curve has been superimposed, showing the
density of fractures. Note that this log does not correlate well with the commercial product, but
shows good correlation with both the phase congruency and curvature plots. In particular, the
green colour indicates large values of both phase congruency and curvature in both plots and
corresponds to large values of fracture density on the FMI log.


Figure A8.17. The seismic section on the left and, from left to right: a contractor section that computes fracture density, phase
congruency and curvature. Superimposed on the three computed sections on the right is an FMI, or Formation MicroImager, log
that measures fracture density.

Finally, Figure A8.18 shows a plot of initial production (in cubic metres) versus amplitude of
phase congruency, computed over six wells. As can be seen in this plot, there is a roughly linear
trend between initial production and the amplitude of the phase congruency. In other words,
higher phase congruency correlates with more fractures, and more fractures correlate with more
production.

Figure A8.18. A crossplot of initial production from the carbonate reservoir with phase congruency amplitude.


Conclusion

In this Appendix, we described the application of the phase congruency approach for identifying
discontinuities on seismic data slices. As discussed, phase congruency has found application in
the identification of features on photographic images and is used in image processing for robot
vision. However, the method had seen little application to seismic data. We first described the
theory of the phase congruency method, and then illustrated the method on a simple image
consisting of an overlapping cylinder and cube. Finally, we applied the algorithm to two seismic
examples, comparing the phase congruency results with other seismic techniques such as
coherency and curvature.

In our first seismic example, a karst collapse study from the Boonsville field in Texas, we found
that phase congruency did a good job of identifying the karst features. From an economic
standpoint, the identification of the karst features was of great interest since it led to the
identification of compartmentalization within the reservoir interval above the karst collapse
zones (Hardage et al., 1996).

Our second seismic example was a fractured carbonate reservoir from Alberta. We found that the
phase congruency method was able to identify the areas in the field in which maximum
fracturing had occurred. These areas of high fracturing in turn correlated with the highest initial
production values in the field. When amplitude of phase congruency was plotted against initial
production, a good correlation was found.

In this study, we found that other attributes such as coherency and curvature also performed
well. However, the phase congruency method when applied to seismic slices gives a new and
different seismic discontinuity attribute, one that can add value to ongoing seismic exploration
and production efforts.
