RadExPro 2024.1 Manual
User Manual
(Revised 04.04.2024)
RADEXPRO EUROPE OÜ
Järvevana tee 9-40
11314 Tallinn, ESTONIA
Visiting address:
26, S. Tsintsadze st.
Tbilisi, Georgia
E-mails:
support@radexpro.com
sales@radexpro.com
Table of Contents
Table of Contents ...................................................................................................................................... 2
Introduction ............................................................................................................................................. 13
Database Object Selection Window: Saving and Opening Project Data Objects ............................. 101
Geometry Spreadsheet – spreadsheet editor for trace header values ................................................ 103
SEG-D Input (Rev. 3) (Input of disk files in SEG-D rev 3.0, 3.1 format) .................................... 144
Correlation Statics (calculation of automatic and correlation-based static corrections) ............... 558
Picking (legacy module; we recommend using the Statics Correction module instead) ...................... 569
VSP SDC (Vertical Seismic Profiling Spherical Divergence Correction for nonconverted reflected compressional waves) ................................................................................................................... 766
The software runs on standard Windows PCs and places no special demands on hardware, nor does it require any additional software.
RadExPro is available in several configurations that share a common user-friendly graphical interface but differ in the set of embedded processing modules:
Start - basic processing of 2D onland near-surface seismic reflection data, refraction seismic data processing, and surface wave analysis (MASW);
Professional - advanced processing of 2D/3D offshore and onland near-surface reflection data,
processing of refraction seismic data, analysis of surface waves (MASW), infield processing and QC of
deep 2D/3D seismic data, processing of VSP/DAS VSP data.
This manual describes the RadExPro Professional configuration. If you have the Start configuration of the software, be aware that some features described in this manual are not present in your configuration. If you have the Real-Time configuration, additional documentation on the RT facilities will be provided upon request. You can check the content of the configurations with our support team at support@radexpro.com.
System Requirements
Minimum:
Recommended minimum:
A Project Manager window will open – create a new project there, select it in the list of registered projects and click the OK button to start working with it.
The list of Registered projects contains 3 columns: Project name, Date created and Date modified.
Click on the header of any column to sort the list accordingly.
Right-click on the list area to see the context menu. You can Sort items there as well: besides the options available through the column headers, there is one extra sorting option in the menu – As added. If selected, it will resort the projects in the order in which you added them to the list (the latest added project will be at the bottom of the list):
Another command of the context menu will Open project folder in a new Windows Explorer
window.
The Project directory string at the bottom of the window contains full path to the folder of the currently
selected project. You can select the string contents with the mouse and copy it to clipboard.
• New project... allows creation of a new, empty project, the name of which will appear in the
list of registered projects
• Select project... allows adding an already existing project into the list of registered projects.
• Remove from list button removes the selected project from the registered project list (the
project itself remains on your hard disk).
• Save list... button saves the list of registered projects to a text file.
• Load list... button loads the list of registered projects from a text file.
When the required project is selected, click the OK button. To leave the window without saving changes, click Cancel.
Project Management
What is a project?
All data processing is performed within the framework of a processing project. A project is a set of
different types of data, their geometry, as well as processing flows applied to the data. Each project is
kept in a separate directory on a hard disk.
A project database has a three-level structure:
• Area,
• Line,
• Flow.
An Area consists of one or more Lines; each Line, in turn, is made up of Flows.
Project data objects can include seismic data, borehole data (wells), time picks (picks), velocity picks, grids, and bitmap images. Each data object of the project is associated with a particular level of the database, attached to a specific area, a specific line, or a specific flow. Although RadExPro allows working with external data files, in most cases work with a data file should start with its registration in the project database.
For example, suppose you are processing data acquired near a village along 3 lines. You may create a new project (see the section "Creating a new project"), add an area "Unnamed", and inside the area create 3 lines: "Profile1", "Profile2", and "Profile3" (see the chapter "Working with a project", paragraph "Working with a project tree").
Next, you have to load the data into the project. In this case, the data are sets of field seismograms for each line (data loading is described in the chapter "Working with a project", paragraph "Registration of the data files in the project"). Seismograms of the 1st line should be tied to the level "Profile1", those of the 2nd line to "Profile2", etc.
After loading the data, you may create several processing flows for each line (see the chapter “Working
with a project”, in the paragraph “Working with flows”) and run the tasks.
You may need to save intermediate processing results. You can do this by creating new datasets inside
the project, linked to any level of a project.
You may exit the program and, during the next run, open the previously created project with its areas, lines, attached seismograms, and processing flows (see the paragraph "Opening an existing project"). You can modify the project, add new lines or flows, change the parameters of procedures in previously created flows, etc. All your work is well organized and stored in one place. You can look through the project to find out what has been done earlier and add something if needed. After you have finished the processing, you may need to extract the processing results from the RadExPro project and save them to a file on disk in one of the standard formats. There are special modules for data output to external files (see the chapter "Processing modules", paragraph "Data I/O").
Creating a new project
The creation of a new project starts with choosing a directory on disk where all data objects of the project, as well as its database files, will be stored.
In the RadExPro Project Manager, click the New project... button. A dialog window for project folder selection will appear.
Select a directory and click the OK button. If the OK button is not active, it means that the selected
folder already contains another RadExPro project.
After the directory for a new project has been selected the New database window will appear. Enter the
name of the new project in the Title field. If the Create subfolder option is activated, the subdirectory
with the name specified in the Title field will be created in the selected directory. Otherwise, a new
project will be created directly in the selected directory.
After all parameters have been specified, click the OK button. A new project will be created in the
specified directory. In this case the RadExPro program will create five project database files in the
directory:
• data.fbl
• struct.fbl
• data.fsp
• struct.fsp
• atoms.tbl
The name of a new project will appear in the registered project list (Registered projects) of the
RadExPro Project Manager window.
Select the created project by clicking the left mouse button (MB1) (the Project directory field will display the project path) and click the OK button.
Opening an existing project
In order to open a project it should first be added to the registered project list (Registered projects) in
the RadExPro Project Manager window.
If the required project is already on the list, simply select it with the left mouse button (MB1) (the Project directory field will display the project path) and click the OK button.
If the project already exists on the hard disk but its name is not displayed in the registered project list, click the Select project... button. The window for project directory selection will open.
Select the directory in which the project is stored and click the OK button. If the OK button is not active, it means that the selected directory does not contain a RadExPro project.
If the project directory was successfully selected then its name will appear in the registered project list
(Registered projects). Double click on the project name (or select it and click the OK button) to open
it. When a new project is opened for the first time, a new Area, Line, and Flow will be created automatically and displayed on the Project Tree panel on the left of the main program window. It is recommended that you rename them to give them sensible and informative names:
Admin Mode
In the Admin Mode, you can restrict access to specified flows as well as block some of the functions for those project participants who do not have a password. This protects the project from uncoordinated changes in the structure or processing parameters, as well as from unauthorized data transfer.
To enable the Admin Mode, open the Option tab located on the top panel and click on the Admin Mode...
In Password lock for the following export actions, select the functions that will require a password to access:
You can also enable unlock by control question. To do this, check the Allow unlock by control question
checkbox, select the control question, and enter the answer to it:
Now the Admin Mode label will appear next to the project name, indicating that you are now working in the project in the Admin Mode.
In this mode, you can restrict access to individual flows. To do this, right-click on the flow and select Lock flow:
The blocked flow will be displayed in the flow list with a lock icon:
NOTE: In order to unlock a flow in the Admin Mode, you do not need to enter the password.
You can exit the Admin Mode either by clicking Admin Mode on the Option tab again or simply by closing the project.
After exiting the Admin Mode, the label next to the project name will change to Password Protected Mode. This means that the project is now password protected and access to some parts of the project and to the export functions is restricted. This is the mode in which third-party users will work with the project:
If you are in the Password Protected Mode, you need to enter a password each time to access the flows and
functions that are locked by the administrator.
In order to unlock the flow in the Password Protected Mode (right-click on the flow → Unlock flow),
you need to enter the password:
If you enter an incorrect password, a dialog box will appear in which you will be asked to answer the control question:
Dialog box for answering a control question
After the user enters a correct password or answer to the control question, the flow will become available
to the user:
NOTE: Even after unlocking a flow, you will need to enter the password to perform export operations. If you are working in the Password Protected Mode, you need to enter the password each time you access the export functions.
Working with Projects
Starting Work with a Project
After creating a new project (or opening an existing one), the main window of the application will be
opened. The window title shows the name of the current project.
• The Application Menu (more information is available in The Main Application Window Menu section);
• Processing window (more information is available in the Scratch file section)
• Database Navigator
• The status bar indicates hints on working with objects in the Working Window.
The Main Application Window Menu
This menu contains the following submenus:
• Database
• Options
• Tools
• Help.
The Database submenu features a number of commands for working with the project database:
• Save... saves the current database structure into a special file with the *.dbs extension;
• Load... loads the previously saved database structure from a file with the *.dbs extension;
• Geometry Spreadsheet... allows viewing and editing the values of the header fields;
• Dataset history... allows viewing the processing history of a dataset (a set of seismic data)
registered in the project;
• DXF export... exports objects into the DXF format. This menu item allows saving time picks in
the DXF format for further processing in GIS and CAD systems (more information about this is
available in the DXF Export section);
• Edit headers fields list... allows editing the list of headers, i.e., editing and deleting existing ones
as well as adding new ones;
• Exit exits the RadExPro program.
• Pick matching accuracy: this option allows you to specify the accuracy with which the program will match the values of the pick keys against the corresponding trace headers of a floating-point type (Real and Real8 headers) to which the pick will be bound.
Operation principle
First, the algorithm checks the keys of the loaded pick and the trace headers for a complete match. If a complete match for a pair of keys is found, the pick point is bound to the trace, and the algorithm proceeds to the next pick point. If a full match is not found for any trace, the interval [Key_value - Accuracy; Key_value + Accuracy] is determined for the trace headers. The pick point will be bound to the first trace in the sort where both pick keys fall within the specified intervals.
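As an illustration, the two-pass matching described above might look like the following sketch (hypothetical Python pseudocode; the names and data structures are illustrative, not the actual RadExPro internals):

def find_matching_trace(pick_point, traces, accuracy):
    # pick_point: dict of the two pick key values, e.g. {"CDP": 105.0, "OFFSET": 250.0}
    # traces: list of dicts of trace header values, in the current sort order
    # accuracy: the Pick matching accuracy value for floating-point headers
    # Pass 1: look for a complete (exact) match of both keys.
    for trace in traces:
        if all(trace[key] == value for key, value in pick_point.items()):
            return trace
    # Pass 2: no exact match - bind to the first trace in the sort whose headers
    # fall within [Key_value - Accuracy; Key_value + Accuracy] for both keys.
    for trace in traces:
        if all(abs(trace[key] - value) <= accuracy
               for key, value in pick_point.items()):
            return trace
    return None  # the pick point cannot be bound to any trace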
The Tools submenu contains additional data processing tools:
• Edit queues...: editing the queues (more information about this is available in the section on queues);
• 3D CDP binning...: binning of three-dimensional CMP seismic data (more information about this
is available in the 3D CDP Binning section);
• HVT→VVT...: conversion of a horizontal velocity model into a vertical one (more information about this is available in the Horizontal-to-Vertical Speed Law Converter (HVT→VVT) section).
In the Windows menu section, you can enable/disable the display of Project tree, Processing Flow,
Flow status, All Modules, and Actions panels of the Working Window.
The Help → About menu item displays the current version of the RadExPro software.
Scratch Files
Scratch files is an option that allows you to store temporary files of processed projects in any directory specified by the user. This makes it possible, for example, to use SSD drives to store scratch files, which greatly speeds up module operation.
By default, all scratch files are stored in the folder with the current project. To set the location for storing the files, go to Option → Scratch folder... in the main program window.
"External scratch files option" operation window. A checkmark next to "Use global setting" means that the
current project will use the shared directory for scratch files, which is the same for all projects. It can be
changed using the tab "Edit global settings…". A checkmark next to "Use project settings" means that the
current project will use the local scratch files storage directory: either inside the project folder or the one
specified by the user.
Parameters:
Use global setting – the current project will use the shared directory for scratch file storage. Changing this directory will affect all the projects that use Use global setting. To edit the path, click the Edit global setting button:
The window for changing the shared directory for scratch file storage.
By default, the shared directory is not set, and the scratch files are saved to the project folder. To set it, check Use custom scratch folder, specify the path to the folder, and then in the first window select Use global setting.
Now any project with the "Use global setting" settings will use the specified directory to store scratch
files.
Use project setting – the current project will use its own local (individual for this project) scratch file storage directory. If this option is checked, a file with the .ini extension will be created in the current project folder, specifying the path to the scratch files. By default, the scratch files are stored in the project folder. To change the directory, select Use custom scratch folder and specify a new path where the current project will store its scratch files.
ATTENTION! When the project is loaded, the current Scratch files settings are checked. If the specified directory does not exist for the project, the settings are reset to the defaults:
1. If Use global setting is set in the project settings but there is no shared directory, the parameters automatically change to Use project setting and Store scratch file into default folder inside project.
2. If Use project setting is set in the project settings but the local directory is missing, the Use global setting parameters are checked automatically. If a shared directory is defined in those settings, or the Store scratch file into default folder inside project parameter is set, those parameters are used for the current project. Otherwise, Use project setting and Store scratch file into default folder inside project are used.
Processing Tab
The Processing tab allows:
• managing the project structure (creating Areas, Lines, and Processing Flows, as well as deleting, copying, and renaming them);
• launching Processing Flows;
• tracking the status of Processing Flows;
• viewing the logs of executed Processing Flows;
• combining multiple Flows into queues and launching them in a predetermined sequence.
The tab consists of five panels: Project tree, Processing Flow, Flow status, All Modules, and Actions.
The visibility of these panels can be switched on or off separately from the Windows menu.
Working with a Project Tree
A project tree is a graphical representation of the data organization in a project. A project has a three-
level hierarchical structure: Area, Line, and Flow. Each Area may include one or more Lines, each of
which in turn can include one or more Processing Flows.
When selecting an existing project, the Project tree panel displays the previously constructed tree of
Areas, Lines, and Flows, which can be edited. The names at each level are sorted alphabetically. The
panel for an existing project looks similar to the example below.
When creating a new project, the project tree will look as follows:
When creating a new project, the "Area 1" Area, the "Line 1" Line, and the "Flow 1" Flow are
automatically created. We recommend renaming the automatically created database project levels
immediately by assigning them meaningful and informative names. Areas, Lines, and Processing Flows
can be changed, copied, or deleted; and their properties can be determined.
When working with a project tree, the status bar displays hints on using the mouse and the keyboard:
• MB1 on a Flow: left-click (MB1) once on the name of the Area/Line/Flow to select the
corresponding level of the project tree. To select multiple objects at the same time, hold the Ctrl
or Shift keyboard key;
• MB2: right-click (MB2) once on the name of the Area, Line, or Flow to display a context menu
pertaining to this level;
• MB1 and drag: press and hold the left mouse button (MB1) on the name of the Flow and drag the
pointer to the name of a Line to copy the Flow to the Line. Lines and Areas can be copied in a
similar way.
To create a new Area, right-click (MB2) anywhere below the last Processing Flow to open the context
menu and select Create Area....
To cancel creating a new element, click the cross symbol or the Cancel button to the right of the line
with its name:
For the convenience of project navigation, the elements of sublevels can be expanded and collapsed. Once the first Flow has been created, an arrow icon will appear next to the name of the corresponding Line; the same icon will appear next to the name of the Area. To hide all the Flows of a Line, click the arrow next to the name of the Line. Clicking on the arrow near the Area hides all the existing Lines of the Area. In this case, the arrow will change its form. Re-clicking on the arrow will display the hidden elements again.
The same functions are performed by the arrow buttons at the top of the project tree panel (the right arrow hides all the elements of the Area, and the left arrow redisplays them). Lines/Flows can also be collapsed in another way: double-click (MB1) on the parent element.
Copying Lines/Flows
The copying function makes handling projects easier: it is not necessary to recreate the project tree
structure over and over again, adding all the required modules. Any elements can be copied: Areas,
Lines, and Flows. This procedure can be carried out even between two different projects. If multiple
items are selected, the operation will be applied to each of them.
Copying a project tree element can be done in any of the following three ways:
• Right-click (MB2) on the copied object and select Copy in the context menu. Then move the
pointer to, and right-click (MB2) on, a parent element and select Paste. The name of the new
element will appear in the editing window, where it can be changed;
• Drag the selected element to a parent project tree element while holding down the left mouse button (MB1) and release the button. The pointer icon will indicate whether the selected element can be copied to the desired location;
• Copy/paste with the help of "hot keys": select the object to copy by clicking on it with the left
mouse button (MB1) and then press Ctrl + C. Then, select a parent object and press Ctrl + V or
use the context menu.
Removing Areas/Lines/Flows
In the course of work, it may become necessary to remove a project tree element. Areas/Lines/Flows
can be removed in two ways:
• right-click (MB2) on the object to be deleted and select Delete in the context menu.
• select the element and press the Delete key on your keyboard.
This will open a warning window where you will be required to confirm or cancel your decision.
PLEASE NOTE: Even though the deleted project tree element can be recovered by using the Undo
command (Ctrl + Z or via the Actions panel), all the associated objects (datasets, picks, etc.) cannot be
recovered!
To change the name of a project tree element, open up the context menu by right-clicking (MB2) on the
object, and select Rename.
This will open an editing window in the project tree, where you can change the name of the
element.
To cancel renaming the element, click the cross symbol or the Cancel button to the right of the line with its name:
Display Settings
To change the display settings (font, text color, etc.) of the Project tree panel, open the context menu
by right-clicking (MB2) on the corresponding element.
Clicking in a blank space on the panel below the tree allows changing:
• Background: the Background… context menu item;
• Text color: the Text color… context menu item;
• Font: the Font… context menu item;
• Reset fonts and colors: the Reset fonts and colors… context menu item.
More information about display settings can be found in the Customizing the Processing Project
Appearance section.
Editing Processing Flows
This arrangement offers a number of advantages in structuring data processing: it allows storing the processing history, makes it possible to return to any step and repeat the entire process with changed settings, or to apply an already built Processing Flow to another data set.
The processing sequence (graph or Flow) is a set of processing modules whose inputs and outputs are
connected in some way. RadExPro has two modes of presentation of Flows: simple (text) and complex
(GUI). In the simple (text) mode, a Processing Flow is presented as a list of processing modules so that
the output of each module is connected to the input of the following one.
In the complex (GUI) mode, processing modules are presented in the form of images, and the
connections between them are presented in the form of arrows. Inputs and outputs of processing modules
are connected by arrows manually; at the same time, modules may have multiple inputs and outputs, and
the Processing Flow may branch out and/or form cycles. This representation of a Flow provides more
options for the organization of data processing. Because of this, the Flows that have been created in the
graphic mode may not always be represented in the text mode.
Each Flow begins with one of the data input modules from the Data I/O section. Usually (for data that has already been loaded into the project), this module is Trace Input. The data input module initializes the Flow and allows selecting the data to be processed. A data input module can occur multiple times within a single Flow, which can be useful, for example, for:
• combining multiple data sets for joint processing;
• processing a data set in various ways with subsequent joint visualization by the Screen Display
module or merging into a single output file.
The above modules are followed by data editing and processing modules: correction of the amplitudes,
filtering, summation, etc. Each Flow normally ends with a module to record data to the disk (for example,
Trace Output) and/or an interactive application (for example, Screen Display). The result of each
processing procedure/module can be configured by changing the module parameters.
All the processing units can be found in the All Modules panel and are grouped by section. For example,
Data I/O, Signal Processing, Interactive Tools, etc. To expand or to collapse a section, click on the
arrows on the left side of the name of the group or on the buttons on the top panel of the window.
NOTE: Modules marked with an * symbol (an asterisk) are Stand Alone modules; they must be the only
active modules used in a Processing Flow.
Switching Between Flow Editing Modes
When creating a new Processing Flow, the text presentation mode is selected. To switch to the graphical mode, click the corresponding button on the top panel of the Working Window; the sequence of the procedures will remain the same as in the text presentation mode. You may need to adjust the spatial position of the module icons manually for a better visual perception of the Flow.
If the Flow is presented graphically but contains no branching or loops, it may be represented in the text mode. To switch such a Flow to the text mode, click the corresponding button on the top panel of the Working Window.
Select the desired Flow in the project tree using the left mouse button. The Processing Flow panel will
display a set of modules that have been used to create the Flow.
A processing module can be added into the current Flow using one of the following methods:
• Drag it using the left mouse button (MB1) from the processing module library (All Modules) to
the Flow edit window;
• Using the Auto Search function (this function works if the mouse pointer is above the Project Tree, Processing Flow, or All Modules panel), start typing the name of the module on the keyboard. A window will appear containing a list of modules whose titles feature the entered text. In the list that appears, select the necessary module and add it to the Flow by double-clicking (MB1) or using the Enter key;
• Double-click on the name of the necessary module in the processing module library (All Modules).
Adding a new module to the Flow opens its Settings dialog box. Its appearance depends on the selected
module.
PLEASE NOTE: Modules marked with an * symbol (an asterisk) are Stand Alone modules; they must
be the only active modules used in a Processing Flow.
In graphic mode, the Flow in the edit window is presented as a graphical image. Each module that is
included in the Flow is represented by a white rectangle with the module name and logo inside.
Inputs and outputs of modules are to be connected manually by means of arrows. To connect two modules, move the mouse pointer to the first module, right-click (MB2) on it, hold the right mouse button while moving the pointer to the next module, and finally release the mouse button.
If several arrows have been connected to the same input of a module, the data obtained from them will
be combined (according to the order of creation of such arrows). If several arrows are connected to the
same output of a module, all data that “exit” the module will go through each of these arrows.
Construction of a Flow in the graphic mode consists of adding modules to the Flow and connecting these
modules to each other by means of arrows. Subsequently, modules can be added to or temporarily
excluded from a Flow, as well as copied to and pasted from the clipboard.
A processing module can be added to the Flow in the graphic mode using the same methods as in the text mode (see above). As before, adding a new module to the Flow opens its Settings dialog box, whose appearance depends on the selected module.
To change the parameters of a module that has already been included in a Flow, double-click with the
left mouse button (MB1) on its image in the Flow, which will open its Settings dialog box.
To move the module image, drag it while holding the left mouse button (MB1) down.
Managing Modules in a Flow
The methods for copying, deleting, selecting, and commenting on modules in a Flow that are used in the
text mode do not differ from those used in the graphical mode.
Note that, for the user's convenience, several modules can be selected in a Flow at once (any action will be applied to all of the selected modules): simply select them while holding down the Ctrl key.
All the actions with the project are displayed chronologically in the Actions panel window. If necessary,
returning to a certain project status is possible by selecting this status in the Actions panel.
Also, changes can be undone/redone with the use of the following hotkeys combinations: Ctrl + Z
(undo) and Ctrl + Y (redo).
You can save the Flow to a file and later load it into a new Flow so as not to re-create "basic" Flows from one project to the next. To save a Flow to a file, right-click (MB2) in an empty area of the main window and select Save Flow... in the menu that appears. This will open the Save window, where you need to select the directory and the file name. The file extension is *.rff.
In order to load the Flow from a file, right-click your mouse (MB2) in an empty field in the main window.
In the menu that appears, select Load Flow... and select the file in the correct directory.
NOTE: If you are loading a Flow from a file into a Flow that already contains a set of modules, a warning message will be displayed that all the previous modules will be removed when you select the Load Flow... command.
To switch the mode, click the corresponding button at the top of the Processing Flow panel. When the framed mode is enabled, the data passes through the Flow frame by frame. To configure the framed mode parameters, click on the arrow to the right of the button and set the parameters in the window that opens.
The desired frame size (expressed in the number of traces) is set by the Frame width parameter. The Honor ensemble boundaries checkbox prevents splitting the traces of an ensemble between frames. If it is on, the frame will always be supplemented up to the boundary of the last included ensemble.
In addition, the Honor ensemble boundaries option can be used for reliable processing of each input ensemble separately (for example, viewing strictly one ensemble at a time in the Screen Display module). To do this, turn on the Honor ensemble boundaries option and set the desired frame size in traces to 1. In this mode, when forming each frame, the program will read the first trace and supplement the frame up to the ensemble boundary.
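A minimal sketch of the frame-forming logic described above (illustrative Python, not actual RadExPro code):

def form_frames(traces, frame_width, honor_ensembles):
    # traces: list of (trace_data, ensemble_id) pairs in input order
    i, n = 0, len(traces)
    while i < n:
        j = min(i + frame_width, n)
        if honor_ensembles and j < n:
            # Supplement the frame up to the boundary of the last included ensemble.
            last_ensemble = traces[j - 1][1]
            while j < n and traces[j][1] == last_ensemble:
                j += 1
        yield traces[i:j]
        i = j

With frame_width=1 and honor_ensembles=True, each frame is read as exactly one complete ensemble, which is the configuration described above for ensemble-by-ensemble processing.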
By default, the normal mode is activated (the button is not pressed); in this mode, input and output are completely controlled by the parameters of the relevant modules. This mode can be used to load data of one or more datasets (or files) into the Flow, process them, and write the result into a new dataset or, for example, a SEG-Y file. This is the most frequently used mode.
To enable the batch mode, click the corresponding button. A settings window for this mode will appear automatically if the list of input data is empty. The same settings can be accessed by clicking on the arrow to the right of the Batch-mode button and selecting Edit. The Flow will be started separately for each file from the list, as many times as there are files or datasets in the batch.
Depending on the selection, either a standard Windows file selection dialog or a RadExPro database
object selection dialog will open when the Add files/Add datasets button is pressed.
NOTE: The type of objects in the batch definition list must match the input module used in the Flow.
Thus, if Trace Input is placed at the beginning of the Flow, the batch must comprise datasets; if SEG-Y Input or SEG-D Input is placed at the beginning of the Flow, the batch must comprise files of the appropriate format.
Once objects have been added to the batch, the buttons on the right side of the dialog box with up and
down arrows can be used to change the position of the selected object in the list. The button with a
crossed red circle removes the object from the list.
To save the name of the file or the dataset in the header, place a check mark next to Save file/dataset
index to header.
NOTE: When working in the batch mode, you must specify in the input/output modules used that the names of the files/datasets are to be taken not from the module parameters but from the batch. For this purpose, the modules that support this mode contain a special checkbox: From batch list. The following is an example of the input dialog box for SEG-Y format files with the batch mode activated:
At the moment, the following modules support batch input and output:
Input:
• Trace Input
• SEG-Y Input
• SEG-D Input
• SEG-2 Input
• SEG-D Input (Rev. 3)
Output:
• Trace Output
• SEG-Y Output
• Header Output
Stand-alone:
• Marine Geometry Input
• Resort
• Tides import
• Plotting
SRME:
• 2D SRME Interpolation
• 2D SRME Geometry Return
The dialog boxes of output modules that support the batch mode contain a checkbox From batch list
(which must also be enabled in this mode) and a button for setting parameters of the batch output: Batch
output settings. The following is an example dialog box for the Trace Output module settings that has
been prepared to work in the batch mode:
Clicking on the Batch output settings will open the output parameter settings dialog box (it is the same
for all modules):
Since the Flow is performed for each object from the batch list separately, the name under which the processing result is saved is generated automatically each time, based on the name of the current input object. The suffix defined in the Suffix to add field is appended to the end of the original name of the object (the default suffix is _res).
The storage path for the objects is determined in the Output path field.
If the Same as input checkbox is enabled, the processing result is stored at the location from which the current input object was read.
Otherwise, it is necessary to press Select path... and to specify explicitly the storage path for the results:
in Trace Output, this will be the project database path; in the other output modules, the folder in the
computer's file system.
NOTE: Keep in mind that using the input path to save the results (the Same as input option enabled) is only possible when the types of the input and output objects coincide (a file at the input and a file at the output, or a dataset at the input and a dataset at the output). Otherwise (a file at the input and a dataset at the output, or vice versa), the program will generate an error at the Flow execution stage unless the path has been specified manually.
Finally, one must define what has to be done if the same name already exists in the chosen path (When
file with the same name exists):
• Overwrite: the object will be overwritten (without alerting the user);
• Skip: the recording of such an object will be canceled (without alerting the user); the existing
object will remain in the chosen path;
• Save both: both objects will be saved. The name of the new object will be appended with the serial
number of the version.
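The automatic naming and the When file with the same name exists policy can be illustrated with the following sketch (hypothetical Python, shown for file output; for Trace Output the same logic applies to database paths):

import os

def batch_output_name(input_path, output_dir, suffix="_res", on_exists="Save both"):
    # The output name is the input name plus the suffix from "Suffix to add".
    stem, ext = os.path.splitext(os.path.basename(input_path))
    candidate = os.path.join(output_dir, stem + suffix + ext)
    if not os.path.exists(candidate):
        return candidate
    if on_exists == "Overwrite":
        return candidate   # the existing object is overwritten silently
    if on_exists == "Skip":
        return None        # recording is canceled silently; the old object remains
    # "Save both": append a serial version number to the new object's name
    # (the exact numbering format is an assumption of this sketch).
    version = 2
    while os.path.exists(os.path.join(output_dir, f"{stem}{suffix}_{version}{ext}")):
        version += 1
    return os.path.join(output_dir, f"{stem}{suffix}_{version}{ext}")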
The Flow status panel at the bottom of the window displays information about the launch of the Flow:
The Flow status panel displays information about the Flow implementation status (the procedure being
performed and the percentage of its completion). After clicking on the "Run this Flow" button, a tab with
the name of the Flow will appear in the upper part of the window (this tab can be closed by clicking on
the cross symbol). The Flow implementation status is located under each tab. Switching between these
tabs can be done with the use of the left mouse button (MB1).
At the moment, RadExPro features several interactive modules that involve user interaction during their
execution (for example, Screen Display, VSP Display, etc.). The interactive module exists in a separate
window and can be regarded as an independently performed task by the user.
The user can switch from the interactive module window to the RadExPro Working Window by standard MS Windows means and continue working there. At the same time, the Flow that comprises the interactive module is still in progress, and the interactive module window remains available for visualization and activation. Thus, the user can launch multiple versions of the same Flow and compare the results on the screen.
Interactive modules are completed either as provided by the module interface (in this case, the Flow
continues up to its end) or upon the interruption of the Flow or at the end of the entire program (the
closing of the Main Working Window).
When running more than one Flow, keep in mind that each running Flow uses computer memory (mostly
for storing the read data); therefore, a limited number of Flows can be run simultaneously.
Terminating a Flow
To terminate a Flow, click the corresponding button on the Flow status panel on the tab of the corresponding Flow. Then confirm the action in the dialog box that opens.
Working with Logs
The program features the Logging functionality for the Flow execution process. As the Flow is executed, the resulting notifications are logged to a text file (a Log). If necessary, the file can be viewed
with any text editor. It is especially useful to review the log file when an error occurs in the course of
execution of the Flow because logs can clarify the cause of the failure and/or determine the stage at
which the failure occurred. The logging mode is optional and can be turned off if desired. The program
features a special dialog box that can be used to configure logging parameters. The editing mode can
also be enabled in this dialog box.
The editing dialog box for logging parameters can be opened from the main program menu by means of
the Options → Logging... command. This will open the parameter assignment dialog box (Logging
parameters).
Parameters
Enable Logging: this parameter is used to enable/disable the logging mode.
Log content: this parameter is used to determine the types of messages to be written to the log file:
• Status: duplication of messages that appear in the Flow Status panel;
• Report: Flow progress reports that are specifically generated for the log;
• Debug: notifications about errors that occur;
Log size limit: this parameter is used to specify the set of parameters that define the allowable size of
the directory with log files;
• Track log size: this parameter is used to enable/disable the tracking mode for the size of the
directory with log files;
• Maximum log size (MBytes): this parameter is used to specify the maximum size of the
directory with log files expressed in megabytes;
• Action on reaching log size limit: this parameter is used to determine the action to be
performed when the directory with log files reaches its maximum size:
▪ Automatically delete old logs: automatically delete the contents of the directory with
log files;
▪ Warn on project loading: when loading a project, post an alert on the screen that the
maximum size of the directory with log files has been reached.
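A sketch of how the size limit might be enforced (illustrative Python; the function and message texts are hypothetical, not the actual RadExPro implementation):

import os

def check_log_folder(log_dir, max_mbytes, action):
    # Total size of all files in the log directory, in bytes.
    total = sum(os.path.getsize(os.path.join(log_dir, name))
                for name in os.listdir(log_dir))
    if total <= max_mbytes * 1024 * 1024:
        return
    if action == "Automatically delete old logs":
        # Delete the contents of the directory with log files.
        for name in os.listdir(log_dir):
            os.remove(os.path.join(log_dir, name))
    elif action == "Warn on project loading":
        print("Warning: the directory with log files has reached its maximum size.")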
Viewing Logs
Press the corresponding button on the top Processing Flow panel to view the logs. The drop-down list will display all the logs related to the current Flow in chronological order. Double-clicking on the selected
log file will open it in the MS Windows application associated with the *.txt extension (the default
application: Notepad).
The program offers the ability to launch Flows in separate, independent queues, and you can configure the number of these queues individually. The queues run in parallel, which can significantly reduce the overall processing time when multiple CPUs or cores are available.
Once you've selected a queue number, the system will open the "Queues" dialog window. You can also access this window by clicking the corresponding button on the top Project tree panel or by selecting the corresponding option from the main menu (Tools → Edit queues…).
The upper section of the window displays numbered queue tabs (you can add more queues by clicking
the "+" sign).
The current queue editing window displays a list of active Flows that have been added to this queue,
along with their status. The Flow status indicates the stage of the procedures being executed:
Queued Flows are processed sequentially, from top to bottom. You can change the processing order by
dragging a Flow to a new position using the mouse. To remove a Flow from a queue, select it with the
mouse and press the "Delete" key on your keyboard.
The lower part of the window provides information about the queues, including the queue number, the
start time of the queue, the number of the queued Flow currently being processed, and the task's status:
To stop the execution of Flows in a specific queue, click the "Stop" button.
The right-hand part of the window includes a list of Flow procedures and management buttons:
- Run this queue: Initiates the processing of the current queue.
- Run all queues: Starts the processing of all available queues.
- Delete finished queues: Removes completed queues from the queue status list.
Replica system
Introduction
Replicas are instances of the same flow that differ in the datasets supplied to the flow input, as well as in module parameters. The datasets and module parameters for each replica are defined by variables listed in a special table (a replica table, or variable table).
Instead of building and running a separate flow for each dataset manually, you just need to carefully fill in the variable table (the replica table) once and prepare the flow template in accordance with the required syntax.
ATTENTION: We highly recommend that you master the replica tool, since it can significantly
automate the processing workflow.
Task: from the existing data, assign to each trace the number of the profile it belongs to, and output the data to a separate folder.
1) Create a replica table with the required variables (see below);
2) Create a template flow containing the Seg-Y Input, Trace Header Math and Seg-Y Output modules, using the variables from clause 1) and a special syntax:
A replica table (or a variable table) is a database object of the same level as Datasets, Picks, and Velocity picks. You can create and view the replica table in the Database Navigator.
To do this, enable the replica display mode (the gear icon, Toggle replicas) on the Database Navigator tab.
After that, right-click on the list of database objects, select "New replica" from the drop-down menu, specify the name for the table, and open the created table by double-clicking the left mouse button:
Enter the name of the variable in the "Name" field and the variable type in the "Type" drop-down list (Real, Int64 or String). After that, click the Add button to add the variable to the table. A column for the variable will appear in the table:
For example:
"Count" parameter is responsible for the number of new rows that will be inserted into the column, and
"Position" parameter indicates where new rows will be added: before or after the selected row. After
the parameters are set, click on "Insert" button and the new rows will be added.
Editing
If the variable has a numerical value that changes from row to row by a constant increment, it can be filled in automatically. To do this, specify the initial value of the variable (the "Start value" parameter) and the step between its values (the "Step" parameter). Then select the range of values you want to fill with the mouse and click the "Fill" button.
Thus, the necessary number of variables can be added to the table, as shown in the figure below:
Completed replica table. Each replica has a corresponding row (highlighted in different colours).
One replica corresponds to each row in the table. This means that the template flow will first be executed only with the variables specified in the first row, then with the variables of the second row, etc.
See section "Creating a Flow Template" to create a template flow and use variables in it.
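Conceptually, replica execution is a loop over the table rows, with the variables of each row substituted into the module parameters. The following illustrative Python sketch (the table values and the substitution model are simplified assumptions, not RadExPro internals) shows the idea:

import re

def substitute(template, variables):
    # Replace {@name} and {@name,spec} entries with values from one table row.
    def repl(match):
        name, spec = match.group(1), match.group(2)
        value = variables[name]
        return format(value, spec) if spec else str(value)
    return re.sub(r"\{@(\w+)(?:\s*,\s*([^}]+))?\}", repl, template)

replica_table = [                       # one replica per row
    {"AREA": "BLACK_SEA", "SEQNUM": 3, "LINE": 2836},
    {"AREA": "BLACK_SEA", "SEQNUM": 5, "LINE": 2812},
]

for row in replica_table:               # rows are processed top to bottom
    path = substitute(r"Data\SEGD\Seq{@SEQNUM,03d}_{@AREA}{@LINE}", row)
    print(path)                         # e.g. Data\SEGD\Seq003_BLACK_SEA2836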
• Open... button allows you to open a previously created variable table. To do this, specify the path to it in the RadExPro project database.
• Save button overwrites the table taking into account current changes in the database.
• Save As... button makes it possible to save the variable table under a different name in the
database without overwriting the previous version.
• Export button allows you to upload a variable table in .txt format from a database to the specified
hard disk location.
• Import button allows you to load a variable table into a database from a file saved in .txt format.
In addition to importing a previously created replica, it is possible to import a text file in either ASCII or Unicode (UTF-8) format.
ATTENTION! The first line in the imported file should contain the names of the variables (Area,
Profile, Number_of_shots, Operator) that you want to add to the replica table. The program accepts Latin
characters, underscores and numbers.
Text file import settings window
When you click on the Import button and select a file to upload, a window will appear where you
need to select the type of separator and the range of lines to be loaded:
• Tab
• Space
• Other – another type of separator (;:\,). Separators can be written one after another without separating characters, since the program treats each unique character as a separator
• Join Separators
• From line – specify the line from which the import will start
• To line – specify the last line to import
Also indicate how you want to add the variable table: Overwrite – overwrite the existing replica table; Append – add to the existing one (in this case, the total number of rows will be equal to the maximum of the two available, and missing values will be supplemented with the Header No Value).
When adding replicas, the variable type in the uploaded file is recognized automatically; if a variable cell is empty, its value is set to Not Assigned.
Replica table before importing additional variables
A flow template is a sequence of modules that uses the variables specified in the replica table. If at least one module uses variables, the entire flow automatically becomes a template. The template remains the same throughout the flow execution.
ATTENTION!
The @ symbol and the variable name must always be written together, with no space between them. If you enter a space after @, this part of the expression will be treated as plain text, and no variable substitution will occur.
If you need to set a specific numeric or text format for a variable, you can do this using extended
syntax.
For example:
{@file_no, 06d} – six-digit integer; missing high-order digits are replaced with zeros (lines of the form:
"000001", "000002", ..., "000123", ..., etc.)
{@first_channel_offset, 6.2f} – six-digit real number, 2 decimal places, missing high-order digits are
replaced with spaces (lines of the form: "1.00", "2.50", "123.32", ..., etc.)
Format specifiers are described in more detail at the end of the section.
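These specifiers follow the familiar printf-style conventions. For comparison only, the same formatting expressed in Python (this is an illustration, not RadExPro syntax):

print(format(1, "06d"))        # -> "000001"   zero-padded six-digit integer
print(format(123, "06d"))      # -> "000123"
print(format(1.0, "6.2f"))     # -> "  1.00"   width 6, 2 decimal places, space-padded
print(format(123.32, "6.2f"))  # -> "123.32"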
ATTENTION!
If a variable is used in the parameters of a module in a flow, the module becomes a template module and is marked with a T letter icon.
If at least one module in a flow becomes a template, the whole flow becomes a template. The template flow is also marked with a T letter icon.
Sequence of template flows (left) and template modules (right). If variables are used in a module, it automatically becomes a template marked with a T letter icon. If there is at least one template module in a flow, the flow automatically becomes a template.
Example 2. Template flow creation and replica system use when processing
seismic data.
Input data:
There is a large database of 2D profiles. The path to the folder containing data for each profile is as
follows:
Data\SEGD\Seq003_BLACK_SEA5102836\2836
Data\SEGD\Seq005_BLACK_SEA15102812\2812
Names of the folders from which the data will be uploaded. Based on their names, we will compile a variable table to automate the data loading process.
There are more than 100 files in each folder; each file corresponds to one separate shot. Example:
Names of the SEG-D files corresponding to one of the uploaded profiles. The red rectangle shows the directory that will be entered into the "SEG-D Input" module using variables.
There are also P1-90 format files with geometry for the specified profiles in the "Data\P1-90" directory.
Task:
Considering the names of the folders in which the profile data is stored, we create the AREA, SEQNUM and LINE variables, since they are used in the names of the SEG-D (data) and P1-90 (geometry) files, which will help us automate the loading. These variables will be used in the input and output modules:
Masks can contain plain text, variables from a replica table, "*" and "?" wildcards, as well as ranges. An <a,b|d> entry is also possible; in this case, the numbers are printed in accordance with the d format specifier. Only integer format specifiers are allowed.
Now, in the dialog box, use the variables to enter the path to the specified files, as shown in the figure below:
Data\SEGD\Seq{@SEQNUM,03d}_{@AREA}{@LINE}*\{@LINE}*\<{@SOL_SHOT},{@EOL_
SHOT}>.sgd
Its first part, "Data\SEGD\...", is responsible for the path to the folder with the data.
Next, there is the part of the path entered with the help of variables.
"Seq{@SEQNUM,03d}_{@AREA}{@LINE}*" specifies the name of the folder with the data. The figure shows the correlation between the variables in the table and the names of the folders from which the data will be uploaded.
Note the "03d" format specifier, separated by a comma from the @SEQNUM variable. This specifier tells the algorithm that the values "3", "5" and "7" from the variable table should be read as "003", "005" and "007".
Also note the "*" symbol located after the @LINE variable. It tells the algorithm that any characters following the variable should also be matched. Consider the figure: in the last folder, "Seq050_BLACK_SEA1900l", the letter "l" follows the profile number. Without the "*" symbol, the algorithm would not read the data from this folder, because it would look for the "Data\SEGD\Seq050_BLACK_SEA1900\" directory (without the "l" letter), which does not exist.
Next, there is "…\ {@LINE}*\..." variable in the entry line. It designates a subfolder which name is
the numerical value of the profile number and can also be characterized by the @LINE variable. It
contains SEG-D files of each profile.
"…\<{@SOL_SHOT},{@EOL_SHOT}>.sgd"
The name of each file is a number, so the indicated range designates all the SEG-D files whose names fall into it. We denoted the first and last SEG-D files with the @SOL_SHOT and @EOL_SHOT variables. This is the range to be used for the data load.
This completes the dialog box of the SEG-D Input module. Now, when the module is run, it will read the variable table, and for each replica (a row in the variable table) it will load the files from the directory specified by the variables in the dialog box. In other words, 1 replica means 1 loaded profile.
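To make the mask semantics concrete, here is a sketch of how such a mask could be expanded into concrete file-matching patterns (hypothetical Python; RadExPro's actual matcher is internal, and the <a,b|d> form with a format specifier is left out of this simplified version):

import re
from glob import glob

def expand_mask(mask, row):
    # 1) Substitute {@VAR} / {@VAR,spec} variables from one replica table row.
    mask = re.sub(r"\{@(\w+)(?:\s*,\s*([^}]+))?\}",
                  lambda m: format(row[m.group(1)], m.group(2))
                  if m.group(2) else str(row[m.group(1)]),
                  mask)
    # 2) Expand an <a,b> range into one pattern per number in the range.
    rng = re.search(r"<(\d+),(\d+)>", mask)
    if not rng:
        return [mask]
    a, b = int(rng.group(1)), int(rng.group(2))
    return [mask[:rng.start()] + str(n) + mask[rng.end():]
            for n in range(a, b + 1)]

row = {"SEQNUM": 3, "AREA": "BLACK_SEA", "LINE": 2836,
       "SOL_SHOT": 101, "EOL_SHOT": 250}
patterns = expand_mask(
    r"Data\SEGD\Seq{@SEQNUM,03d}_{@AREA}{@LINE}*\{@LINE}*\<{@SOL_SHOT},{@EOL_SHOT}>.sgd",
    row)
files = [f for p in patterns for f in glob(p)]   # "*" and "?" are handled by glob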
Step 2. Dealing with data.
ATTENTION!
The modules used here are just one example of the use of the replica system, whose functionality is much broader.
Let's say that we need to delete bad shots and assign headers to each loaded data trace.
Use of variables in "Data Filter" module to delete bad shots from dataset.
Use of variables in "Trace header Math" module to assign values to S_LINE, SEQ_NUM and ACQ_S_LINE
headers.
After all the necessary operations with the profiles are carried out, we output the data with the help of the "Trace Output" module into the directory specified using variables:
GOM\010 DataInput\{@SEQNUM,03d}_{@AREA}{@LINE}
Data output using the "Trace Output" module. The path is also entered with the help of variables, which makes it possible to assign any names to the output datasets.
To run the flow, press the "Run" button; you will then be prompted to select a variable table from the database and mark the rows for which the flow will be executed:
For this template, the result of the replica execution will be 4 datasets displayed in the Database:
For this stage, a separate flow consisting of the following modules will be created:
1) Trace Input – data loading. To speed up the process, we load only the headers from the dataset. To do this, check "Load headers only" in the "Trace Input" window.
2) Import P1-90 – loading the geometry files.
3) Header↔Dataset Transfer – overwriting the headers from the flow into the database, thus avoiding duplication of the same dataset in the database.
In "Trace Input" module, using "Mask" tool and variable syntax enter the path to the files uploaded in
step 3 (as shown in step 1)
Path to data files uploaded in step 3.
The same will be done in the "Import P1-90" module, taking into account the names of the .p190 geometry files:
Uploading geometry files using the "Import UKOOA P1-90" module and variables.
Rewriting the Geometry table from the current flow to the database using the "Data Transfer" module.
Thus, we created the second template flow, which assigns geometry for each replica.
After that, you will be prompted to select the variable table for which the specified 2 flows will be executed. Next, the standard RadExPro queue dialog box appears.
Selecting the replicas created in steps 1–4 for processing before running the template flows.
End of Example 2.
Task: output SEG-Y files from an existing dataset so that a separate SEG-Y file corresponds to each of the 15 shot points (SP).
Step 1. Create a replica table with an FFID variable listing the 15 shot points.
Step 2. Create a template flow containing the "Trace Input" and "Seg-Y Output" modules:
Trace Input module. The {@FFID} variable in the "Selection" window.
Seg-Y Output module. The {@FFID} variable is included in the names of the saved files.
Step 3. Run the flow.
Obviously, in this case, the use of replicas significantly reduced the file output time.
APPENDIX
Number format specifiers:
1) Only "0" and " " (space) are supported as a filler for numbers. Space is used by default.
2) d means output of an integer. If a variable is of a real type, then its value is rounded to the nearest
integer (0.5 → 1).
3) f means output of a real number with a fixed number of decimal places; for example, the 6.2f
format means that the output number occupies at least 6 characters and always has 2 decimal places.
ATTENTION! If you do not specify the number of decimal places, it will be equal to 0.
4) If the output field width is not important, use an entry of the .3f or .3e type.
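These specifiers follow the familiar printf-style width.precision convention. The lines below show
printf-style equivalents of the specifiers above (the variable value 3.14159 is purely illustrative):

    # printf-style equivalents of the replica number format specifiers (illustrative).
    x = 3.14159
    print("%03d" % round(x))   # "003"       -> like {@VAR,03d}: width 3, zero-filled
    print("%6.2f" % x)         # "  3.14"    -> like {@VAR,6.2f}: width 6, 2 decimals
    print("%.3f" % x)          # "3.142"     -> like {@VAR,.3f}: no fixed width
    print("%.3e" % x)          # "3.142e+00" -> like {@VAR,.3e}: scientific notation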
Admin Mode
In the Admin Mode, you can restrict access to specified flows and block some functions for those
project participants who do not have the password. This protects the project from uncoordinated
changes to its structure or processing parameters, as well as from unauthorized data export.
To enable the Admin Mode, open the Option tab located on the top panel and click on the Admin
Mode...
In Password lock for the following export actions, select the functions that will require a password
to be accessed:
You can also enable unlock by control question. To do this, check the Allow unlock by control question
checkbox, select the control question, and enter the answer to it:
Now the label Admin Mode will appear next to the project name, indicating that you are working in
the project in the Admin Mode.
When you are working in the Admin Mode, the label Admin mode is shown after the name of the project.
In this mode, you can restrict access to individual flows. To do this, right-click on the flow
and select Lock flow:
In order to block access to a flow, right-click on it and select "Lock flow".
The locked flow will be displayed in the flow list with a lock icon:
NOTE: In order to unlock a flow in the Admin Mode, you do not need to enter the password.
You can exit the Admin Mode either on the Option tab, by left-clicking Admin Mode, or simply
by closing the project.
After exiting the Admin Mode, the label next to the project name will change to Password
Protected Mode. This means that the project is now password protected and access to some parts of the
project and to the export functions is restricted. This is the mode in which third-party users will work
with the project:
If you are in the Password Protected Mode, you need to enter a password each time to access the flows and
functions that are locked by the administrator.
In order to unlock the flow in the Password Protected Mode (right-click on the flow → Unlock flow),
you need to enter the password:
If you enter an incorrect password, a dialog box will appear asking you to answer the control
question:
If you enter an incorrect password, a window will appear offering you to answer the control question.
Dialog box for answering the control question.
After the user enters the correct password or the answer to the control question, the flow becomes
available to the user:
NOTE: Even after unlocking, you will still need to enter the password to perform export
operations.
If you are working in the Password Protected Mode, you need to enter the password each time to
access the export functions.
Database Navigator Tab
The Database Navigator tab is designed to manage database objects and allows
viewing lists of objects of different types (datasets, picks, velocity laws, etc.) together with additional
information about them. Here, various operations on objects can also be performed: any object can be
renamed and deleted; the contents of datasets and trace headers can be viewed quickly; picks can be
imported/exported; and so on.
Database Navigator provides a quick and easy access to all the data types that are associated with the
project (more information is available in the Object Types section).
When a Flow, a Line, or an Area is selected in the project tree (by clicking), a list of objects that belong
to the selected project level is displayed in the table. Objects that belong to all the sublevels of the
selected level (for example, all Flows of the selected Line) can also be optionally included in the list: to
do this, put a check mark in the Show objects from sublevels box, which is located on the top bar of the
Working Window above the project tree.
For the convenience of working with the project tree, the Navigator features buttons that allow
collapsing all the levels of the project tree ( ) or expanding them ( ) at once.
Select the object type (or multiple types) whose details will be displayed in the table to the right of the
project tree. This can be done using the following buttons:
– picks;
– vertical velocity picks;
– wells;
– QC ranges.
Information about each button pops up when the mouse pointer is moved over it. The pressed button is
highlighted.
Table columns depend on the selected object type. If multiple types have been selected at the same time,
the table will only show common columns (for example, Name, Location, etc.).
To bring up the menu for a particular object, right-click (MB2) on it. The list of operations that can be
performed with the object through the menu depends on the type of the object. A detailed description of
the features associated with each of them is provided in the Object Types section. Objects of all types
can be renamed (Rename) and deleted (Delete). The look of the table (colors and fonts) can also be
customized via the context menu.
If a Processing Flow that creates a new database object or modifies an existing one runs in parallel
while the Database Navigator tab is open, the information in the table may not be updated in time. To
force an update of the information in the table, click the Refresh button in the upper right corner of the
Working Window.
Object Types
The Database Navigator table can display the following types of objects: sets of seismic data (datasets),
picks, vertical velocity picks, wells, layered velocity picks, and QC ranges.
All of these types have two common columns in the table: Name (the name of the set of seismic data)
and Location (the location of data).
Seismic Data
The table provides for the following specific columns in order to display information on sets of seismic
data:
• Trace count: this column indicates the total number of traces;
• Sorted by: this column indicates sorting parameters;
• Created: this column indicates the date of creation of the data set;
• Modified: this column indicates the date of changes to the dataset;
• Size, MB: this column indicates the physical size of the data on the hard drive expressed in MB.
The context menu, which can be opened by right-clicking, allows performing the following specific
operations:
• Geometry spreadsheet...: this operation allows viewing seismic data headers
• Quick view (2-D)...: this operation allows quickly viewing data in Screen Display with the default
display settings or the settings that have been selected for this set;
• Quick view (2-D) options...: this operation allows selecting the following Quick View options for
the selected dataset: the frame size and the display options;
• History...: this operation allows viewing the processing history.
• Empty – removes all the traces from a dataset; an empty dataset remains in the database and is
displayed greyed out.
• Export… – exports a dataset to a disk file in the RadExPro data exchange format.
• Import dataset… – imports a dataset from a disk file in the RadExPro data exchange format.
Picks
The table provides for the following specific columns in order to display information on picks:
• Length: this column indicates the number of points in the pick;
• Headers: this column indicates the headers associated with the pick;
• Style: this column indicates the style of the pick: line thickness, color.
Via the object context menu which can be opened by right-clicking, a pick may be exported to a text file
(Export...), imported from a text file (Import pick...) or edited (Edit…).
Vertical Velocity Picks
The table provides for the following specific columns in order to display information on velocity picks:
Via the object context menu that can be opened by right-clicking, a velocity pick may be exported to a
text file (Export...), imported from a text file (Import velocity pick...) or edited (Edit…) in a simple
spreadsheet window.
Wells
The table provides for the following specific columns in order to display information on wells:
• Title: this column indicates the name of the well;
• X coordinate: this column indicates the X coordinate;
• Y coordinate: this column indicates the Y coordinate;
• Latitude: this column indicates the latitude;
• Longitude: this column indicates the longitude;
• Altitude: this column indicates the altitude;
• Depth: this column indicates the depth;
• Incline: this column indicates the inclination angle of the well.
In order to create a well, right-click (MB2) outside the table and select the Create well... menu item.
This will open a Settings window with three tabs: Passport, Data files, Log data.
The Passport tab is designed to specify general data on the well:
The fields in this tab should be filled in by the user. They mean the following:
The Data files tab is designed for loading data on wells from files:
By clicking one of these buttons, you can open the dialog box for selecting a file from which the data
will be loaded (see information about file formats in the appendix).
When loading the time-depth curve (Time/Depth) and layer model (Lay model), in the respective
Depth field, specify the type of depth that should be taken from file: true depth (True) or cable depth
(Cabel).
If the file was loaded successfully, the name of the loaded file is displayed in the field to the right of
the button, and a check mark appears in the Defined field opposite the file name, confirming that the
well possesses this type of data.
The Log Data tab serves to load the well log data.
The list of currently loaded log curves is shown on the left. The units of measurement are displayed in
brackets next to the name of each curve.
With the help of the Import from las, Rename and Delete buttons you can import new log curves from
files, rename curves and delete the loaded curve, respectively.
To add a new curve to the list, use the Import from las button, which opens the dialog box for LAS file
selection. When a file of a supported format is selected, the following dialog will appear:
In the right part of the window there is a list of all log curves available in the specified file. In the left
part of the window there is a list of log curves that should be added.
By " and " buttons you can choose the log curves that should be loaded to database.
You should also indicate which column of the file contains the depth values. To do this, select the
required curve (the one that represents depth) in the list on the left side and then click the Depth button.
After that, the information about which column will be treated as depth will be displayed in the bottom
part of the dialog box, in the Treat as Depth: field.
After you fill in all the tabs and click the OK button, the standard database object saving dialog box
will be displayed. In this dialog box, you can adjust the path within the database and the name for the
new database element.
No specific operations have been provided for this type of objects in the context menu.
QC Ranges
The table provides for the specific Color column in order to display information on QC ranges.
No specific operations have been provided for this type of objects in the context menu.
Database object Map polygon.
Map polygons are database objects that define a region of the area in (X:Y) coordinates.
Using context menu, you can open the table to edit map polygon angle points:
Insert – adds polygon angle points. The number of new points is controlled by the row(s) parameter,
and the location where the new lines are inserted (above or below the highlighted line) is controlled by
the current row parameter.
You can copy values from Excel tables, as well as save an opened table into a text file using the Save
to… button. Click Apply to save the changes in the table.
Working with Objects
The following represents the common actions that can be performed with objects in the Database
Navigator.
Renaming an object:
right-click (MB2) on the object and select Rename in the opened context menu. The table will display
the Edit Window where the name of the object can be changed.
Deleting an object:
right-click (MB2) on the object and select Delete in the opened context menu; alternatively, select the
object with the mouse and press Delete on the keyboard. A pop-up alert window will open, in which it
is necessary to confirm the decision:
Customizing Table Columns
The table has specific information sections for each type of data (columns) that can be hidden or made
visible. For example, the following columns are provided for seismic data:
To hide a column, right-click on it (MB2) and uncheck the corresponding option in the opened context
menu.
The table provides for sorting the objects by columns. To do this, double left-click (MB1) on the
necessary column header.
Customizing the Processing Project Appearance
For the convenience of working in the application, the following attributes can be changed in almost all
the panels of the Working Window: style, size, font color, and background color. Right-click (MB2) on
most elements of the interface in order to change:
• Set color: the Set color… context menu item;
• Text color: the Text color… context menu item;
• Clear color: the Clear color… context menu item.
• Background : the Background… context menu item;
• Text color: the Text color… context menu item;
• Font: the Font… context menu item;
• Reset fonts and color – the Reset fonts and colors… context menu item.
In addition, the project tree allows specifying individual colors for particular levels of the project. In
order to do that, click on the needed level of the project tree on the Processing tab and select
Set color... in the context menu. (To reset the color settings, select the Clear color menu item.) In this
case, the tabs of the Flow status panel will be colored in the color of the flow. To disable this option,
uncheck the box next to Reflect colors in the flow status tabs.
Database Object Selection Window: Saving and Opening Project
Data Objects
The Database Object Selection window in the RadExPro project is a standard tool for working with
database objects, just as the Open/Save File dialog box is a standard tool for working with files in the
Windows operating system.
The Database Object Selection window is displayed almost every time an existing database object
of the RadExPro project is selected or a new object is saved in the database.
The title of the Database Object Selection window comprises a request to the user and depends on the
task being performed.
Below is an example of a Database Object Selection window during creation of a new dataset in the
project:
In the left part of the dialog box, the project tree is located, and the current section is highlighted. To the
right, a table with objects is located. In general, the window behavior repeats the Database Navigator
tab with all its features except that the type of objects displayed in the table cannot be changed because
it has been pre-defined.
The Object(s) line, which displays the name of the selected object, is located on top.
If the dialog box is opened to create a new object, the corresponding line becomes editable: type in the
name under which the new object will be registered in the project. After choosing a name and clicking
OK, the object will be created in the current section of the project database. To create an object in another
section of the database, change the current section.
Similarly, when opening an existing database object, first select the section that contains the necessary
object in the project tree of the Database Object Selection window, and then select the object in the
table to the right.
As on the Database Navigator tab, it is possible to use the Show objects from sublevels
checkbox to show in the table not only the objects of the current level, but also objects from all of its
sublevels.
Geometry Spreadsheet – spreadsheet editor for trace header
values
The Geometry Spreadsheet application is designed for visualizing and editing header field values from
a dataset within the project, presented in spreadsheet format.
Choose the method that suits your preference to launch the application.
Using the Geometry Spreadsheet editor
The work window consists of the following elements: a main menu at the top, a toolbar on the right, and
a table for viewing and editing headers.
Main menu
The main menu includes the following options:
• Save Changes - Saves all changes made to the fields during the application's operation.
• Undo All Changes - Reverts all changes made.
Tools - These commands enable various manipulations with headers and the import/export of data:
• Header Math - Performs mathematical operations with the header values (refer to the section
titled "Trace Header Math").
• Trace Enumerator - Opens the tool for specifying continuous trace numbering, sorted by two
header fields (see the section titled "Trace Enumerator Tool").
• Export... - Exports the content of the spreadsheet to a file in ASCII format.
• Import... - Imports a file in ASCII format. You need to specify the text rows and columns to
import into the spreadsheet (refer to the section titled "Import of Files in ASCII Format").
• Import SPS X... - Imports SPS-X files (see the section titled "Import of SPS-X Files").
• Import UKOOA p1-90... - Imports UKOOA p1-90 files.
• Show Statistics - Displays statistics about the selected cells, including average, minimum,
maximum, and more.
• Fill selected block... — Activates group fill bar to fill the selected block of cells with new
values. Enter the starting value in the "Start from" field, specify the increment in the "Step" field
and click the “Fill” button.
Toolbar
The Toolbar consists of two tabs:
• This bar appears when you click the toolbar button on the right.
• The Header Addition Bar is a list of headers where items can be selected.
• You can select elements from this list and drag them into the table using the left mouse
button. This action adds columns corresponding to the dragged headers to the table.
• Headers that are already displayed in the table are not included in the list.
• The "Assigned Fields Only" checkbox controls which headers are displayed in the list
for addition to the table. When checked, only headers with assigned values in the
seismic dataset currently under consideration are shown. When unchecked, all project
headers are displayed.
• If the selected headers are not present in the dataset, they are shown in the table as "Not
Assigned."
Header Addition Bar.
• The Data Filtering Bar becomes visible when you click the toolbar button on the right.
• To add headers for filtering, choose the header name from the drop-down list and click the "Add"
button.
• After doing so, a new line with the header name, along with input fields for entering initial and
final values, will be added to the filter list.
• You can use "Inf" (∞), "+ Inf" (+∞), "-Inf" (-∞), and "NA" as valid initial and final values. For
"NA," it should be entered into both input fields.
• To remove a header from the filter, click the "Cross" button.
Table of Headers
The following buttons and hot keys are used during the editing of the header spreadsheet:
• Single Left Mouse Button Click (MB1) on a header column selects the entire column. To select
multiple columns, choose them individually while holding the Ctrl button (Ctrl+MB1).
• Double Left Mouse Button Click (MB1) on a header column sorts the values in the column in
ascending order.
• Double Click (MB1) on a cell allows for text editing.
• Single Click (MB1) on a cell selects the cell; use the Shift key (Shift+MB1) for multi-selecting
cells.
• The selected values can be copied to the clipboard using the Ctrl+C keyboard shortcut.
• Select cells and use Ctrl+V to paste values from the clipboard into the selected cells. If the size
of the cells in the clipboard exceeds the selected block you're pasting into, only a portion of the
clipboard values will be inserted.
• Ins (Insert): Fills the selected block with new values. Enter the starting value in the "Start from"
field, specify the increment in the "Step" field and click the “Fill” button.
The "Trace Header Math" module, designed for performing mathematical operations on header
values, is activated through the command located in the Tools/Header Math... menu section of the
geometry table editor window.
The module's description can be found in the paragraph titled "Processing modules" →
"Geometry/Headers" → "Trace Header Math" in the current manual. The key distinction of this
tool in the Geometry Spreadsheet from the module is that the mathematical operations are applied to
the headers in the table immediately after clicking the OK button in the parameters window.
Import of ASCII files
When the Tools/Import command is selected from the menu of the Geometry Spreadsheet window,
the Import headers dialog box appears:
Import parameters
File... - opens a standard window for ASCII spreadsheet file selection from which the values will be
imported.
Matching Fields - displays the header fields against which existing values will be compared with the
values from the selected column during the import process. This comparison is used to map the columns
selected in the Assign Fields section to their corresponding header fields.
Assign Fields - presents the names of the headers to which the values imported from the columns will
be assigned.
The correlation between Matching Fields and Assign Fields is illustrated by the following example: if
you configure the parameters as depicted in the figure above, then in all traces with a RECNO field value
equal to 40, the REC_X field values will be replaced by 413291.0. Similarly, in traces with a RECNO
field value equal to 41, the REC_X field values will be replaced by 413289.0, and so on. In other words,
the settings in the Matching Fields section establish a matching criterion. Once this matching
criterion is satisfied, the import is executed according to the settings specified in the Assign Fields section.
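In code terms, the import behaves like a keyed lookup: for every trace whose matching-field value
equals a value read from the file, the assign-field value from the same file line is written into the trace
header. Below is a minimal sketch using the RECNO/REC_X values from the example above (the trace
list itself is illustrative):

    # Minimal sketch of the Matching Fields / Assign Fields logic (illustrative traces).
    file_rows = [(40, 413291.0), (41, 413289.0)]        # (RECNO, REC_X) read from file
    traces = [{"RECNO": 40, "REC_X": 0.0},
              {"RECNO": 41, "REC_X": 0.0}]

    lookup = dict(file_rows)                            # matching value -> assign value
    for trace in traces:
        if trace["RECNO"] in lookup:                    # matching criterion satisfied
            trace["REC_X"] = lookup[trace["RECNO"]]     # assign the imported value
    print(traces)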
Both the Matching Fields and Assign Fields sections feature the following buttons:
• Add... - This button allows you to add a header name from the headers list into the respective
list.
• Delete - Use this button to remove a header name from the list.
The following parameters must be set individually for every header in the Matching Fields and
Assign Fields lists:
Column - This represents the number of the column in the text file that corresponds to the selected
header. To modify the association between headers and columns, follow these steps:
Multiplier - The data from the selected column will be multiplied by this value during the matching
process with values of the selected header (in the Matching Fields case) or during importation into the
selected header (in the Assign Fields case).
There is also a group of parameters called "Lines" where you should define the range of lines in the text
file. Depending on the settings in the Matching Fields and Assign Fields fields, the imported values
will be obtained exclusively from within the specified line range. To set the starting line number, left-
click (MB1) on the desired line in the text file content window, and then click the "From" button. The
number of the selected line will appear in the "From" field. Similarly, you can specify the ending line
number using the "To" button. Additionally, you can manually enter line numbers in their respective
fields (please note that line numbering in a file starts with 0).
The "Save Template" and "Load Template" buttons are intended for saving the current import
parameters as a template within the project database and for loading import parameters from a previously
saved template, respectively.
The "Load with 1D (2D) Interpolation" option enables the import of a file with interpolation. If there
are missing values in any part of the file, they will be linearly interpolated using the header(s) field
specified in the Matching Fields block. To set the header(s), select it in the "Reference Field"
(Reference Field X, Reference Field Y) option.
Extrapolate - This option is used to extrapolate values that are missing at the beginning or end of the
file. There are two sub-options:
• No Extrapolation - No extrapolation will be performed.
• Use Edge Values - Missing values will be filled with the last available value present in the file.
Additionally, for the "Extrapolate" option, you can perform extrapolation using a specified trend. The
trend is calculated based on the number of points selected in the "Points to Estimate Trend" field
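For the 1D case, the behavior can be pictured as linear interpolation over the reference header, with
Use Edge Values repeating the first/last known value outward. A minimal numpy sketch under those
assumptions (the header values are illustrative):

    # Minimal sketch of 1D interpolation with edge-value extrapolation (illustrative).
    import numpy as np

    recno = np.array([40, 41, 42, 43, 44])                    # reference header
    rec_x = np.array([413291.0, np.nan, 413287.0, np.nan, np.nan])
    known = ~np.isnan(rec_x)

    # np.interp fills interior gaps linearly and repeats the edge values outside
    # the known range, matching the "Use Edge Values" behavior.
    filled = np.interp(recno, recno[known], rec_x[known])
    print(filled)   # [413291. 413289. 413287. 413287. 413287.]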
When you click the OK button, the following message will appear:
Recommended procedure:
1. First, click the "File..." button to select an ASCII spreadsheet file from which the values will be
imported. The file's content will be displayed in the Import Headers dialog box.
2. Next, specify the range of lines from which the values will be imported. To do this, follow these
steps:
• Select the first line to be imported by clicking the left mouse button, and then click the
"From" button in the "Lines" group. The line number of the first import will appear in the
"From" field.
• Click the left mouse button on the last line you wish to import, and click the "To" button
in the "Lines" group. The line number of the last imported line will be displayed in the
"To" field. (Use the scroll bar for selection if necessary).
• Alternatively, you can manually enter line numbers in their corresponding fields (please
note that line numbering in a file starts with 0).
• If you intend to import all lines from the file, leave the “From” and “To” fields at their
default values of zero.
3. Use the "Add..." button to select headers from the list for both the "Assign Fields" and "Matching
Fields."
4. Link the selected headers with columns in the file. Follow these steps:
• Click the left mouse button (MB1) on the header name in the "Assign Fields" and
"Matching Fields" sections.
• Click on the column with the displayed text that corresponds to the selected header.
• Afterward, click the "Column" button, and the column number will appear in the
"Column" field. Additionally, set the "Multiplier" value for the selected header.
5. Once all parameters are set, click the "OK" button. The import of values will be executed.
6. If there are missing values in any column of the file, a warning message with the position of the
missing values will appear.
Trace Enumerator tool
A tool dialog box appears when you select the "Tools/Trace Enumerator" command from the menu.
This dialog box enables you to sort traces based on two header fields and then assign a continuous
numbering index to them, with the values entered in the third header field.
Parameters
Sorting:
The progression parameters for continuous numbering – Value and Increment – are configured in the
Progression field, where you also select the Target header field to which the acquired values will be
written.
Example of usage:
For example, if the data contains two traces with the same FFID value but differing TRACENO values,
and you apply the tool with the parameters depicted in the figure above, the values of the CHAN field
will be:
FFID  TRACENO  CHAN
1     1        0
1     2        1
2     1        3
2     2        4
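Conceptually, the tool sorts the traces by the two keys and then writes an arithmetic progression into
the target header. A minimal sketch, assuming a start Value of 0 and an Increment of 1 (the actual
progression depends on the parameters set in the dialog):

    # Minimal sketch of continuous trace enumeration (illustrative parameters).
    traces = [{"FFID": 2, "TRACENO": 1}, {"FFID": 1, "TRACENO": 2},
              {"FFID": 1, "TRACENO": 1}, {"FFID": 2, "TRACENO": 2}]

    value, increment = 0, 1                                # Progression parameters
    traces.sort(key=lambda t: (t["FFID"], t["TRACENO"]))   # sort by two header fields
    for t in traces:
        t["CHAN"] = value                                  # write to the Target header
        value += increment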
Show Statistics
When the Tools/Show Statistics menu command is selected, the following values are computed for
the range selected in the table:
This command can also be invoked by right-clicking on the selected table fragment.
Import of SPS-X files
Recommended procedure:
1. Select the SPS-X File: Click the "File..." button to choose the SPS-X file. The file's content will
appear in the lower part of the window.
2. Specify Lines to Load: In the "Lines" section of the parameters, indicate the lines from the file you
want to load. Enter the starting line number in the "From" field and the ending line number in the "To"
field. Alternatively, place the mouse cursor on the desired line and click either the "From" or "To"
button; this will automatically fill in the corresponding line number.
3. Choose Line Separation Mode: Select the mode for separating lines into separate fields under
"Text table type":
- Fixed Width by Positions: Fields within each line will have distinct positions relative to the start of
the line and a fixed width.
4. Set Column-Header Matching: Configure the matching between columns (positions) in the file
and the header fields where the data will be loaded. This matching is achieved using the "Field record
number" and "Channel" fields.
- Begin by determining the "Field record number" that corresponds to the line you are loading.
- Next, use the "From channel," "To channel," and "Channel increment" fields to determine the
channel numbers for this record number.
- Additionally, values from the file will be loaded into the specified headers for each trace. These
headers include Source line, Source station location, Receiver line (pairs record number –
channel), as described by the given line.
- Similarly, the value for Receiver station location is loaded for each trace based on the information
in the fields First receiver station location, Last receiver station location, and Receiver station
location increment. Be sure to specify the name of a specific trace header that matches each field.
The name of each field on the list in the upper-left part of the window is displayed under the
"Definition of Field" column. The current header names corresponding to these fields can be seen on
the right side of the list under the "Header Name" column. To modify the header for any field, you
can either double-click on its name with the left mouse button or select it and then use the "Change
header" button.
To establish the column (position) in the file that corresponds to a field, follow these steps:
1. Select the desired field from the list using the mouse.
2. In Delimited mode, specify the column number or position the mouse cursor over the
target column and then click the "Set column" button.
3. In Fixed width mode, provide the values for both the first and last positions in the
respective entry fields. Alternatively, select the required range using the mouse and then
click the "Set pos" button. The currently selected range will be highlighted in red, while
the range being edited will appear in blue (the intersection of these ranges will be
displayed in green).
The "Chanel increment" and "Receiver station location increment" fields can be populated using
two methods: either by extracting data from the file (using "From file" and specifying the column
positions) or by manual entry (selecting "Manual" and entering the value in the respective entry field).
After configuring all the necessary parameters, click the "OK" button to load data from the SPS-X file.
UKOOA p1-90 files import
UKOOA p1-90 files are imported using the Import UKOOA p1-90 menu command.
The program imports source and receiver coordinates and elevations from the UKOOA file; the rest of
the information contained in the file is ignored. Search for the values corresponding to sources and
receivers is performed based on the S and R identifiers (source and receiver identifier, respectively)
located at the beginning of the line in the file.
The dialog box is divided into two main areas that control the import of headers associated with sources
and receivers: Source format definition and Receiver format definition, respectively. Each import
block has four columns:
Header fields to which other values are linked are shown in blue. For source coordinates this is the FFID
(shot number) field, for receiver coordinates – the CHAN (channel number) field. Three repeating
CHAN header, coordinate and elevation fields are specified for receivers in the geometry assignment
block, since the standard UKOOA p1-90 file has three columns for coordinates/elevations. By default
the range of values for each of the headers is specified in accordance with the standard UKOOA p1-90
file format.
Recommended procedure:
To load the file, click the File... button and choose the file you require. Its contents will be displayed in
the lower pane of the dialog box:
Left-clicking a line containing a header field will highlight the corresponding value column, following
the standard file format. If, for any reason, columns with header values appear to be invalid, they should
be redefined. To achieve this, follow these steps:
Values in the "Receiver format definition" block are assigned using a similar process.
The "Load template" and "Save Template" buttons enable you to save and load the current file import
settings. The "Load default" button restores the default UKOOA p190 file import settings based on the
current format.
Ensure that all values corresponding to the headers are valid, and then click "OK." The coordinate and
elevation values will be recorded in the specified headers of the current dataset within the database.
RadExPro header fields
To store auxiliary information on seismic data, the RadExPro program uses its own set of header fields.
They are kept in database files, separate from the data files, which provides fast access to the header
fields.
When a new project is created, its set of header fields is similar to the trace headers of the SEG-Y
format. However, the header fields can be edited later: you can add new fields, and delete or rename
the existing ones. To view and edit the active set of project header fields, select the menu command
Database/Edit header fields... in the main window of the program (the project tree window).
In the new (or existing) header fields you can record various information, for example, the static shift
of a seismic trace or an arrival time picked on the trace. You can perform mathematical operations on
header field values, convert them into picks, display changes of header values in different datasets, etc.
Below you can find a list of some most important standard RadExPro header fields. We advise you not
to delete these fields or change their meaning.
SFPIND: identification number of dataset used while displaying objects on the map. For 3D data, it is
given automatically when the file is registered in the project. In the case of 2D data, the profile number
is specified in the Profile ID field in the SEG-Y Input.
SOU_INL, SOU_CRL: the source number along the observation line, the source number across the
observation line
REC_INL, REC_CRL: the receiver number along the observation line, the receiver number across the
observation line
ILINE_NO, XLINE_NO: CDP number along the observation line, CDP number across the observation
line
A correspondence of RadExPro header fields to SEG-Y standard trace header is provided in the
description of the SEG-Y Output processing module.
Edit Header Fields
To view and edit an active set of project header fields, select the menu command Database/Edit
header... from the main window of the program (project tree window).
The window containing the list of headers from the active project will open:
To edit the existing header field, double-click the left mouse button (MB1) on the name of the required
field.
To delete an existing header, select it in the list with the left mouse button (MB1) and click the Delete
button. After the deletion has been confirmed the field will be deleted from the project.
When adding a new header field or editing the existing one, the following dialog box will appear:
Here you can specify/change the name, format, and description of the header. Enter/change the header
field name in the Name field.
Select the required format of the header from the list of available formats. The header field values can
be stored in one of the following formats:
In the Description field, specify the annotation or textual description of the header field.
After all parameters have been set, click the OK button. To exit without saving the changes click the
Cancel button.
DXF export
This menu entry allows saving horizon picks into the DXF format for further processing in GIS and
CAD systems. When the option is selected the following dialog appears:
On the left there is a representation of the project tree with picks. On the right there is a representation
of the resulting DXF file structure, possibly including layers and picks to be exported.
You can add picks to the DXF from the project either using the Add button or double-clicking the pick
name. You can remove picks from the DXF by either using the Remove button, or through a context
menu (right click on the pick name and select Delete... in the pop-up menu).
In the DXF you can create layers and save picks either to the root or/and to these layers. To create a
layer right-click the DXF structure field and select the New layer... command of the pop-up menu. When
created, the layer can be renamed: double-click it and type a new name.
When you are done populating the DXF structure with layers and picks, click the Save... or the
OK button and specify the name of the output file. If the OK button was used, the dialog will close,
while the Save... button allows creating several DXF files within one session.
Coordinates. The two matching headers of the pick are interpreted as the X and Y coordinates of the
DXF, respectively, while the pick value is considered the Z coordinate. If both headers are the same (e.g.
CDP_X:CDP_X), the pick is considered 2D. In this case, the header value is stored as the X coordinate
of the DXF while the Y coordinate is assigned 0.
Below there is an example of the RadExPro Plus picks exported to DXF and loaded to AutoCAD:
Processing modules
Data I/O (Data input/output)
Trace input
This module is designed to load seismic data files registered in the project into the flow.
Module parameters
When launching this module, the following window will appear:
The Add… button in the Data Set field calls the standard Database Object Selection dialog box (see
the Database Object Selection window section) with the Choose dataset header, where you select the
needed seismic dataset.
You may choose one or several data sets. The names of the chosen datasets will appear in the Data Sets
list. You may change the position of the data sets in the list using the arrow buttons to the right of the
list (while reading unsorted data, the Get all option may be useful).
Data sets selected with the mouse can be removed from the field using the Delete button.
From batch list – the module supports data loading from a batch list.
The Add mask… button in the Datasets masks field allows setting a mask for loading files (see the
Replica system section).
Load headers only – loads only the headers of the dataset into the flow.
Memory resort – fast in-memory data resorting. Choose the buffer size in MB according to the
available computer memory. This will significantly increase the Trace Input speed due to an efficient
memory resorting algorithm. The algorithm is similar to the one implemented in the Resort* module.
The Add… button in the Sort Fields field calls a standard dialog window containing the list of header
fields.
• Choose the sort keys from the list in order to sort the input data.
• The selected keys will appear in the list Sort Fields.
• Mutual position of keys on the list can be changed using the buttons with arrows to the right of
the list.
• You can delete selected keys from the list using the Delete button.
If the Selection option is on, the data input into the flow will be sorted according to the sort keys
given in the Sort Fields field. For each key, write a line with the input range in the Selection field. A
colon separates the ranges of different keys.
For example:
Let's assume that 2 sort keys are selected in the Sort Fields field. Then the range line in the Selection
field will look like the following:
*:* – all data, sorted by the two selected keys, will be input
*:1000-2000(5) – for the second key, the data will be input within the range of 1000 to 2000 with an
increment of 5.
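To make the range syntax concrete, the sketch below interprets one key's expression the way the
examples describe ('*' accepts everything, 'a-b(step)' accepts a stepped range). The parser is
illustrative, not the module's actual code:

    # Illustrative interpreter for one key of a Selection string such as "*:1000-2000(5)".
    import re

    def key_matches(expr, value):
        if expr == "*":                                   # '*' accepts any value
            return True
        lo, hi, step = map(int, re.fullmatch(r"(\d+)-(\d+)\((\d+)\)", expr).groups())
        return lo <= value <= hi and (value - lo) % step == 0

    ranges = "*:1000-2000(5)".split(":")                  # one expression per sort key
    print(key_matches(ranges[1], 1005))                   # True  (1000 + 5*1)
    print(key_matches(ranges[1], 1003))                   # False (not on the increment)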
The Select from file option loads the selection string from a text file. Click the File… button and
choose the required file in the pop-up window.
The Get all option allows loading the whole data set in the order in which it was recorded, without any
additional sorting. You don't need to assign sort keys and ranges. When you choose this option, the
corresponding fields become inaccessible and their content doesn't affect the result. If the Data
sets field contains several data sets, the first data set input into the flow will be the uppermost in the
list, then the next one, etc. To assign the required input sequence, use the arrow buttons to the right of
the Data sets list.
Trace Output
This module is used to record the data from the flow into the program database in the R4 format.
Module parameters
When launching this module, the following window will appear:
When clicking the File... button the Select Dataset dialog box will appear.
Here, in the Location field, define the path where the file to be created will be stored. Enter the name
of the file to be created in the Object name field or, if you want to save the file under an existing name,
select it from the list in the Objects field.
SEG-Y Input
This module is designed to input external (with respect to the project) files in SEG-Y format into the
flow.
Module parameters
When this module is activated the following window appears:
In the dialog box, you may choose one or several files from one directory (to choose several files, use
the Ctrl and Shift keys).
After you have chosen the files, their names will appear in the File(s) field. When you choose several
files check the correctness of their mutual position in the list: the uppermost file from the list will be
loaded to the flow first, then the next one, and so forth from top to bottom.
If you need, you may correct the mutual position of the files in the list. Select the file for which the
position is to be changed using the mouse and, using the corresponding buttons with arrows to the right
of the list, move it up or down.
You may delete one or several selected files from the list using the Delete button.
The other method is to save the current file list as a text file on a disk (Save list...) and to load the list of
files (Load list...).
Indicate the format in which the samples will be read from the input file in the Sample format field.
Take format from file. If this box is checked, the module will automatically determine the file format
before loading the file. This is useful if you need to load several Seg-y files in different formats (sampling
interval, number of samples etc.).
• I1 – 8-bit integer
• I2 – 16-bit integer
• I4 – 32-bit integer
• R4 – 32-bit floating point real: either IBM floating point or IEEE, depending on the IBM
Floating Point option
Take byte order from file – if different byte order is used in different files, it can be determined
automatically from the file. You can also specify it manually:
Options Big-endian byte order (SEG-Y Standard) / Little-endian byte order set normal and reverse
byte order in the word.
After choosing the SEG-Y files, the sampling interval in ms will be shown in the Sample interval
field, and the trace length in the file, in samples, will be shown in the Trace length field. The values can
be changed by the user, and any change influences the process of file reading. However, the values taken
from the traces of the input file will still be present in the header fields.
After the file is opened, the number of traces in the selected SEG-Y file will be displayed in the
Number of traces field. This value is calculated from the file size and the trace length. If the user has
changed the trace length in the Trace length field, then, in order to recalculate the number of traces in
the file, click the OK button and open the dialog box of the module in the flow again. A new value
corresponding to the changed trace length will appear in the Number of traces field.
The Use trace weighting factor option indicates that the normalizing coefficient recorded in bytes
169–170 of the trace header should be taken into account during amplitude recovery (see the SEG-Y
Output module description). The option is checked by default; however, in cases when these bytes are
filled in incorrectly (for example, contain "trash"), this option needs to be toggled off.
The sort keys of the input data are indicated in the Sorted by field (by header names). Changing the
keys doesn't have any impact on the actual sorting of the input data; however, it allows selecting data
by value ranges of the indicated keys using the Selection option.
The Get all/Selection options get all the data or the part of the data confined by the primary and
secondary keys indicated in the Sorted by field. If only a part of the data is input, the limitations on
the primary and the secondary keys are indicated in the Selection field.
For example:
*:* – all data will be input; *:1000–2000(5) – according to the secondary key, the data will be input in
the range of 1000 to 2000 with a step of 5.
Depending on the type of input data (3D/2D) the 3D Survey/2D Survey options, respectively, should
be selected. If the input data are 2D data (2D Survey option is selected), then you should indicate the
unique profile number in the Profile ID field.
Remap header values - if this option is off, the header fields in the project database will be filled in
automatically on the basis of the standard definition of the selected format. If the option is on, the header
fields will be filled in accordance with format remapping. If the option is active then a text field for
remap specification as well as Save remap... and Load remap... buttons allowing you to save the active
remap in a database and to load the already saved remaps, respectively, via the Database Object
Selection become available.
RECNO – database header field name where the values from the trace header field specified in a remap
will be recorded;
4I – the field format in the trace header, i.e. the format in which the value is stored in the source file;
in this case it is a 4-byte (32-bit) integer.
IBM or IEEE – a flag identifying the representation standard used for real floating point numbers.
Used for the 4R and 8R format only. For the integer types the flag is not used, and nothing is placed
between the two commas.
181 – offset of the required field in the trace header expressed in bytes;
1I – 1-byte (8-bit) integer;
2I – 2-byte (16-bit) integer;
4I – 4-byte (32-bit) integer;
4R – 4-byte (32-bit) floating-point real (either IBM or IEEE standard, depending on the flag that follows);
8R – 8-byte (64-bit) floating-point real (either IBM or IEEE standard, depending on the flag that follows).
Thus, RECNO,4I,,181 means that the RECNO header field, corresponding to the receiver point number,
will be filled with 32-bit integer values read from the trace headers starting at byte 181 from the
beginning of the trace header.
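In other words, a remap describes a typed read at a byte offset inside the 240-byte SEG-Y trace header.
Below is a minimal sketch of what RECNO,4I,,181 does, assuming 1-based byte offsets and big-endian
(standard SEG-Y) byte order:

    # Minimal sketch of applying the remap "RECNO,4I,,181" to one trace header.
    import struct

    def read_remap_field(trace_header, fmt, offset):
        codes = {"1I": (">b", 1), "2I": (">h", 2), "4I": (">i", 4)}
        code, nbytes = codes[fmt]
        start = offset - 1                        # the remap offset is 1-based
        return struct.unpack(code, trace_header[start:start + nbytes])[0]

    header = bytearray(240)                       # a blank 240-byte SEG-Y trace header
    header[180:184] = struct.pack(">i", 1234)     # pretend RECNO = 1234 sits at byte 181
    print(read_remap_field(bytes(header), "4I", 181))   # 1234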
The Save template... button saves the active remap into the database.
The Load template... button loads the remap previously saved in the database.
SEG-Y output
This module serves to save the (processed) data from the flow, into an external file in SEG-Y format
on disk.
Module parameters
When this module is activated the following window appears:
To enter the name of the output SEG-Y file where the data will be saved, click the Browse... button.
After the file has been chosen, its name and path will be displayed in the File field. You can also enter
the output file name in the File field manually.
The option to export a file using the replica system is also available (see the Replica system section).
The format in which the samples will be recorded into the file should be defined in the Sample format
field. Here:
• I1 – 8-bit integer
• I2 – 16-bit integer
• I4 – 32-bit integer
• R4 – 32-bit floating point real: either IBM floating point or IEEE, depending on the IBM
Floating Point option
The SEG-Y Normal/Reverse byte order (MSB first)/(LSB first) options assign normal or reverse byte
order in a word, respectively.
The Trace weighting field allows saving data in an integer format without losing the raw dynamic
range. The options of this field become accessible if one of the integer formats (I1, I2, or I4) is chosen
in the Sample format field.
According to the SEG-Y format specification, the N value can take the values 0, 1, …, 32767. However,
in case the dynamic range of the data exceeds that of the chosen integer format, there is an alternative
option of using negative values of N. To permit the usage of negative N values, check the Allow negative
weighting factor option (this option is accessible when Allow trace weighting is toggled on).
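The underlying idea, per the SEG-Y convention of 2^-N volts per least significant bit, is that the stored
integer sample equals the true amplitude multiplied by 2^N, and a reader recovers the amplitude by
multiplying by 2^-N. A small sketch of that relationship (the amplitude and factor are illustrative):

    # Sketch of SEG-Y trace weighting (illustrative amplitude and factor N).
    amp = 0.00042                  # true amplitude, too small for a plain I2 integer
    N = 20                         # weighting factor written to header bytes 169-170
    stored = round(amp * 2**N)     # integer sample actually written to the file
    recovered = stored * 2**-N     # what a reader applying the factor gets back
    print(stored, recovered)       # 440 0.00041961669921875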
If you don't use Trace weighting, or if you use only positive N values, then in cases where the amplitude
values go beyond the dynamic range of the record format, the following warning appears during the run
of the module:
To toggle off the warning, toggle on the option Suppress out of range warnings.
The Scalars field allows setting multipliers for the elevation/depth and coordinate values. If the
multiplier is positive, the value will be multiplied by it; if the multiplier is negative, the value will be
divided by it. Pressing the … button opens a dialog box where you can specify the headers the scalar
will be applied to.
The Remap header values option allows format remapping while SEG-Y-file recording (see Remap
Header Values in SEG-Y input chapter).
The Save remap and Load remap buttons save the active format remap into the project database and
loads a previously saved remap from the database, respectively.
RadExPro header fields corresponding to the SEG-Y trace header by default:
CHAN     13–16   Channel number (trace number within the original field record)
OFFSET   37–40   Offset (distance from the center of the source point to the center of the receiver group)
SEG-D Input
This module is designed to input external (relative to the project) data from disk files in SEG-D format
into the flow.
Module parameters
The parameters dialog looks like the following:
As there are different ways to calculate the number of trace samples for different seismic stations, you
should indicate the calculation method in the parameter section Trace length:
• NP = (TE-TF)/dt + 1 – if a zero sample corresponds to the start time (TF from Channel Set
Descriptor), while the last sample corresponds to the end time (TE from Channel Set Descriptor)
• NP = (TE-TF)/dt – if there is no information for the end time (TE) in data
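For example, with TF = 0 ms, TE = 1000 ms, and dt = 2 ms, the first formula gives
NP = (1000 − 0)/2 + 1 = 501 samples, while the second gives 500.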
Override trace length – this mode allows the user to specify the number of trace samples manually.
The Apply pre-amplifier gain parameter allows choosing Station Type. The station type selected
affects the position in SEG-D file from where the descale multiplier (MP-factor) will be read.
If you specify a record type in the Skip records of types field, records of the indicated type will not
be loaded by the module. If you indicate -1 in this field, records of any type will be loaded. If you have
to indicate several record types, use a colon as the separator.
The Input channel type(s) field indicates the channel types of the traces input into the flow. Different
channel types are separated by a colon. If you indicate -1 in this field, all channels will be input into
the flow.
The Specify seismic data channel type(s) field indicates the types of channels containing seismic data.
It is used along with the Set auxiliary channel number to negative parameter. When the latter is set,
the channel numbers of channels that are not considered seismic channels become negative.
Suppress warnings. Exceptional cases can sometimes arise when reading data from files. If the cause
of such a situation is not critical, the module can process it in a special way (for instance, in the case of
a mismatch of sample numbers in different channel sets) or proceed to read the next shot/file. The user
is notified with a small message window, and the module stops operation and waits until the user
presses OK. Often this kind of behavior is unwanted; to suppress the notifications on the screen, tick
the Suppress warnings flag. In this case, all messages will be written to a log file, if one is indicated.
Time from stamp – takes the time from the GPS line in SEG-D (supported by some vendors).
Allow different DT and NUMSMP – if this box is checked, the module will search for the maximum
trace length among all readable channel sets and set it as the default for all other traces. This helps avoid
accidental truncation of traces in the shot. The original number of samples and sampling interval will be
written to the trace headers. If the Suppress warnings checkbox is disabled, the sampling interval/trace
length mismatch warnings will be displayed.
Remap SEGD main header values – allows specifying a Remap of the shot header into the RadExPro
headers of the traces being loaded, i.e., you can fill a trace header by indicating the number format
and its position (starting position + increment per channel set) in the file header. If the indicated header
is not registered in the list, the remap cannot come into effect. The number format must be one of the
following: 1I, 2I, 4I, #B, #C, 4RIBM, 4RIEEE, 8RIEEE. A header remap line consists of six fields
and a '/' symbol at the end as a separator between records relating to different headers. The fields
within a record are separated by commas.
For example:
Field 1: YEAR – the name of the registered RadExPro trace header field
Field 2: Year of shot – description of the header field (used for descriptive purposes only); can be
omitted.
Field 3: 2B – format of the value in the SEGD header. This field can be one of the following: 1I (1-byte
integer), 2I (2-byte integer), 4I (4-byte integer), #B (1–9 BCD digits), #C (1–9 digits in symbolic
notation), 4RIBM (4-byte IBM floating point), 4RIEEE (4-byte IEEE floating point), 8RIEEE (8-byte
IEEE double precision floating point). The BCD number format, as well as numbers in symbolic
notation, are indicated in the form 'number B' or 'number C': for example, 3B for three BCD digits or
4C for four digits in symbolic notation.
Field 4: 1:2:3 or flex – a field to control channel sets. If this field is void, all channel sets are
selected. If this field contains set numbers (1:2:3), then, to get the position to read the value from, the
starting position (Field 6) is incremented by the product (set number) × (increment from Field 5). The
set numbers should be separated by colons. The headers are written only for traces that belong to the
indicated sets; this header will not be rewritten for traces from the other channel sets.
If the field contains the identifier flex, then, to get the position to read the value from, the starting
position is incremented by the product (overall number of channel sets) × (increment).
Field 5: Increment – used together with Field 4 to ensure flexibility in indicating the position of the
value to be read. If there is a number in this field, the starting position (Field 6) is incremented by the
indicated number of bytes for each channel set or for all channel sets. If the field is void, the increment
is equal to zero. If the value being read is a BCD number, the increment need not be a multiple of a
whole byte, as one byte contains two BCD digits.
Here, the value for traces from the first set consists of BCD digits 1–3, and for traces of the second set,
of BCD digits 4–6.
Field 6: 10.5 – the starting position in the SEGD header. If the value being read is a BCD number, the
position can be specified in full or half bytes (nibbles). In that case, in order to read the first BCD digit
from the first nibble of a byte, you should indicate a position that is half a byte less than the byte number.
For example, the first nibble of byte 3 has position 2.5, while the second nibble of byte 3 has position 3.
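A small sketch of this half-byte addressing, assuming packed BCD with two decimal digits per byte
(illustrative code, not the module's own):

    # Illustrative read of packed BCD digits addressed by half-byte positions.
    def read_bcd(data, position, ndigits):
        # Byte n's first nibble has position n - 0.5, its second nibble has position n.
        nibble = int(position * 2) - 1            # 0-based nibble index from the start
        value = 0
        for i in range(ndigits):
            byte = data[(nibble + i) // 2]
            digit = byte >> 4 if (nibble + i) % 2 == 0 else byte & 0x0F
            value = value * 10 + digit            # each BCD digit is a decimal digit
        return value

    data = bytes([0x12, 0x34, 0x56])              # packed BCD digits 1..6
    print(read_bcd(data, 2.5, 2))                 # first nibble of byte 3 onward -> 56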
Remap SEGD trace header values. Allows remapping (Remap) SEGD trace headers into RadExPro
trace headers. The remap line format is identical to that of the shot header; the only exception is that
Field 4 and Field 5 should be void. It is also possible to remap the textual part of the external header.
Here, the #: sign is followed by an identifier name; the offset in bytes is then counted from the indicated
identifier. For example, with TIME_DIFF,,2C,#:*GCS90,,41/ the module will find the "*GCS90" string,
count a 41-byte offset from it, then read 2 digits as symbols, convert them to a number, and assign this
number to the TIME_DIFF header field. A number read from the textual header this way can be either
an integer or a floating point number.
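A sketch of this textual-header lookup (the header bytes below are fabricated for the example, and the
offset is assumed to count from the start of the identifier):

    # Illustrative textual-header remap: find "*GCS90", offset 41 bytes, read 2 digits.
    text_header = b" " * 10 + b"*GCS90" + b" " * 35 + b"07" + b" " * 20

    anchor = text_header.find(b"*GCS90")             # locate the identifier string
    start = anchor + 41                              # count the 41-byte offset from it
    digits = text_header[start:start + 2].decode()   # read 2 digits as symbols
    print(float(digits))                             # 7.0 -> assigned to TIME_DIFF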
Debug log file. All messages, as well as other useful information on the file being loaded (for instance,
the main shot header, the channel set headers, and the trace headers), can be saved to a log file. To do
so, switch on the Debug log file option and indicate the name of the text file to which the report on the
job run should be written. This feature can be used for diagnosing errors and exceptional cases.
SEG-D Input (Rev. 3) (Input of disk files in SEG-D rev 3.0, 3.1 format)
This module is designed to input disk files in SEG-D format, revisions 3.0 and 3.1, which are external
to the project. Files of these revisions of SEG-D are more standardized: such parameters as coordinates,
time and the number of samples now occupy fixed places in the file, so many of the parameters required
by the earlier SEG-D Input module are not needed in SEG-D Input (Rev. 3).
Module parameters
File(s)
The File(s) field is intended for selecting input files in SEG-D rev. 3.0, 3.1 format.
The files to be loaded are selected using the Select files... button. After the files have been selected,
their names appear in the list. Files can be removed from the list using the Remove button, and the
Reset button makes it possible to get back to the default settings. The arrow buttons can be used to
change the order in which the files are loaded.
The files can also be imported with the help of a replica or mask using the Add file replica or mask...
button. When you click the button, a line appears in the Specify field, which defines how the files will
be loaded:
• Select file... – select files for loading;
• Select folder... – specify the directory where the necessary file is located. After selecting the
directory, enter the file name in the line either explicitly or using the replica system (see the
Replica system).
To load data from the batch list, set Take from batch list to Yes.
Parameters
Skip records of types – specify the types of records that will not be loaded into the flow by the module.
If the list is left empty, all types of records will be loaded. To specify several types of records, use the
Append item button to add a new list entry.
Input channel type(s) – specify the types of channels for which traces will be loaded into the flow.
Stop flow in case of file loading failure – stop loading data into the flow if any errors occur during the
module's execution.
Apply descale multiplier – automatically selects the place in the file from which the descale multiplier
for trace amplitude normalization is read, and applies it.
Allow different DT and NUMSMP – when the option is off, the original number of samples and the
sampling interval are written to the trace headers (even if these parameters differ from trace to trace). If
checked, the trace length and sampling interval of the first trace in the flow are applied to all traces.
Take date/time from GPS timestamp – take date/time from GPS line in SEG-D.
Remap
Remap SEGD main header values – allows remapping (Remap) of the shot header into the headers of
the traces being loaded, i.e. it is possible to fill in a certain trace header by specifying the number format
and its position (initial position + increment by channel sets) in the file header. If the specified header is
not registered, no override is performed for it. The number format must be one of the following: 1I, 2I,
3I, 4UI (the header in which the information will be written must be of Real8 type), 4I, #B, #C, 4RIBM,
4RIEEE, 8RIEEE. The header override line consists of six fields and the '/' character at the end as a
separator between records relating to different headers. Fields within a record are separated by commas.
Detailed examples are given in the chapter on the SEG-D Input module.
Remap SEGD trace header values – allows remapping of SEGD trace headers to RadExPro trace
headers. The format of the override line is similar to that of the shot header, except that Field 4 (the
channel set control field) and Field 5 (the increment for Field 4) must be empty.
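For instance, a trace header remap record with Field 4 and Field 5 left void might look like this (the
header name and byte position are for illustration only):
FFID,Field file number,4I,,,21/
This record reads a 4-byte integer starting at byte 21 of the SEGD trace header into the FFID header field.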
Logging
Various exceptional situations can occur while reading data from files. In such cases the module
continues its work, and all warnings are written to the log file. To limit the amount of information
written to the log file, specify the following parameters:
SEG-B Input
This module is designed to input files in SEG-B format that are external to the project into the flow.
Module parameters
The parameter dialog of the module is shown on the following picture:
Click the Add... button to choose input files. In the dialog box you may choose one or several files from
one directory (to choose several files, use the Ctrl and Shift keys). After you have chosen the files, their
names will appear in the File(s) field. When you choose several files, check the correctness of their
mutual position in the list: the uppermost file in the list will be loaded into the flow first, then the next
one, and so forth downwards. If necessary, you may correct the mutual position of the files in the list:
select the file (or several files) whose position is to be changed with the mouse and move it up or down
using the corresponding arrow buttons to the right of the list.
You may delete one or several selected files from the list using the Delete button.
It is also possible to save the current file list as an ASCII text file on a disk (Save list...) and to load the
list of files (Load list...) from an ASCII text file.
IMPORTANT! All input files in the list must have the same trace lengths and sample intervals!
When the files are selected, the Trace Length (in samples) and Sample Interval (in ms) are detected
from the headers of the last file in the list. You can check the values for any other file in the list by
double-clicking its name.
If the trace length and/or sample interval are determined incorrectly, you can change the values manually.
When the traces are read from the files, the values of trace length and sample intervals are taken from
the corresponding dialog fields, not from the file headers.
Exceptional cases can sometimes arise when reading the data: e.g. the sample interval indicated in the
file headers may differ from the value indicated in the Sample Interval field of the dialog that will
actually be used. In that case a warning message box is displayed and the module stops and waits until
the user clicks the OK button. This behavior is often undesirable – set the Suppress warnings option
not to see the warning messages.
SEG-2 Input
This module is designed to input external (with respect to the project) files in SEG-2 format into the
flow.
Module parameters
The parameter dialog of the module is shown on the following picture:
Click the Add... button to choose input files. In the dialog box you may choose one or several files from
one directory (to choose several files, use the Ctrl and Shift keys). After you have chosen the files, their
names will appear in the File(s) field. When you choose several files, check the correctness of their
mutual position in the list: the uppermost file in the list will be loaded into the flow first, then the next
one, and so forth downwards. If necessary, you may correct the mutual position of the files in the list:
select the file (or several files) whose position is to be changed with the mouse and move it up or down
using the corresponding arrow buttons to the right of the list.
You may delete one or several selected files from the list using the Delete button.
It is also possible to save the current file list as an ASCII text file on a disk (Save list...) and to load the
list of files (Load list...) from an ASCII text file.
IMPORTANT! All input files in the list must have the same trace lengths and sample intervals!
When the files are selected, the Trace Length (in samples) and Sample Interval (in ms) are detected
from the headers of the last file in the list. You can check the values for any other file in the list by
double-clicking its name.
If the trace length and/or sample interval are determined incorrectly, you can change the values manually.
When the traces are read from the files, the values of trace length and sample intervals are taken from
the corresponding dialog fields, not from the file headers.
Exceptional cases can sometimes arise when reading the data: e.g. the sample interval indicated in the
file headers may differ from the value indicated in the Sample Interval field of the dialog that will
actually be used. In that case a warning message box is displayed and the module stops and waits until
the user clicks the OK button. This behavior is often undesirable – set the Suppress warnings option
not to see the warning messages.
SCS-3 Input
This module is designed to input the external (with respect to the project) SCS-3 files into the flow.
When this module is activated the following window appears:
To set the name of the input SCS-3 file, click the Browse... button. After the file has been chosen,
its name and path will appear in the File field. You can also enter the input file name manually in the
File field.
The Normal/Reverse byte order (MSB first)/(LSB first) options assign normal or reverse byte order
in a word, respectively.
The IBM Floating Point option defines the format for the real numbers (IBM floating point or IEEE).
The Number of traces field reflects the number of traces in the selected file.
The Trace length field displays the number of trace samples in the selected file.
The Override Trace Length flag enables editing of the value in the Trace length field. The number
of trace samples defined by the user will prevail over the number of samples calculated from the header
fields of the first trace.
The Remap header values option (see Remap Header Values in the SEG-Y Input chapter) allows format
remap setting. The Load template... button loads a remap previously saved in the database. The Save
template... button saves the current remap into the database.
Text Output
The module is designed to output the data from the flow to an external ASCII file.
Module parameters
The parameter dialog of the module looks as follows:
Click the Browse button to select an output file name, or type it manually into the File field.
As a result of the module operation, an ASCII file of the following structure will be generated:
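An illustrative fragment of such a file (the values and column spacing here are hypothetical):
T       Tr0000  Tr0001  Tr0002
0.00    0.15    -0.32   0.08
2.00    1.27    0.44    -0.91
4.00    -0.63   2.10    0.37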
The first column (T) contains the two-way time values corresponding to each of the samples of the
seismic traces.
The following columns are the traces. They contain the amplitudes of the samples. The column names
(Tr0000, Tr0001, Tr0002, etc.) contain the trace sequential numbers (0-based).
IMPORTANT! When saving to ASCII, the module truncates the amplitudes to 2 decimal digits. This
may lead to loss of dynamic range (e.g. if all the amplitudes in the dataset vary within the range of
−0.009 to +0.009, the output file will contain only zeros). In such a situation it is recommended to place
the Trace Math routine in the flow in front of Text Output to multiply all amplitudes by a constant.
Super Gather
This module creates trace sets (supergathers) composed of several CDP gathers (in the 2D case) or of
several inlines/crosslines (in the 3D case) and inserts them into the flow, breaking the created set into
trace groups with a specified constant offset step (binning) and summing the binned traces within the set.
Module parameters
When this module is activated, the following window appears:
The Dataset... button opens the Database Object Selection dialog box to select a dataset from the
database.
Depending on whether the data are 2D or 3D, the 2D Gather or 3D Gather option is chosen.
• In the Start field, set the number of the starting CDP for set creation; in the End field,
specify the end point.
• In the Step field, specify the interval (in CDP numbers) between the neighboring sets.
• In the Range field, indicate the number of CDP points to be included in one set.
In the Inlines # parameter group (for 3D data):
• In the Inline Start field, specify the start inline for set creation; in the Inline End field, specify
the end inline.
• In the Step field, specify the interval in inline numbers between the neighboring sets.
• In the Range field, indicate the number of inlines to be included in one set.
In the Xlines # parameter group you can set similar parameters for crosslines.
Activating the Bin Offsets option switches on the offset summing mode.
• The offset start (in the Start field) and offset end (in the End field) should be set.
• In the Step field, the distance between the traces is defined in meters.
• In the Range field, the interval around the summing points is specified in meters. All traces
within this interval will take part in the summing.
The Save template and Load template buttons save the module parameters in the form of templates
into the project database and load the parameters from previously saved templates.
Data Input
This module has been designed to load external seismic data files into the flow. In most cases we
recommend not using this option: if you need to load a file into the flow and there is a specific module
for the file format, it is preferable to apply the specific module rather than Data Input (for example, to
load files in SEG-Y format use SEG-Y Input instead of Data Input).
Data Output
This module saves the (processed) data from the flow into an external file on disk.
Module parameters
The dialog box for specifying the module parameters is similar to the Data Input dialog box; similar
fields have similar meanings. The Remap option and the corresponding text field designed for format
remapping are similar to those of the SEG-Y Input module (see Remap Header Values in the SEG-Y
Input chapter).
In this case the File button opens a standard dialog box for selecting the name of the file to which the
data will be saved. The module saves the data in two standard formats (SEG-Y and SEG-2) and in a
user-defined format.
If a user-defined format is used, apply format remapping to save RadExPro header information into the
trace headers of the output file.
In SEG-2 format, the following extra fields will be recorded into the file header:
The following extra fields will be recorded into SEG-2 trace headers:
• DELAY – 0
• RECEIVER LOCATION – trace_number*dx
• SAMPLE_INTERVAL – dt
WARNING: When saving processed data in a user-defined format with integer data types, pay
attention to the dynamic range of the samples. For example, after automatic gain control the sample
values range from −1 to 1, and when saving these data in the I2 number format the recorded samples
will take on only three values: −1, 0, 1. To save this kind of data in integer formats you can use the
Amplitude correction routine, for example, and set the Time variant scaling equal to a constant for the
whole trace. The constant is defined from the ratio of the range of the integer format in use to the range
of the data in the flow. (The I1 range is −128 to 127. The I2 range is −32,768 to 32,767. The I4 range is
−2,147,483,648 to 2,147,483,647.)
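A minimal sketch of choosing such a scaling constant (a Python illustration only, not part of the module;
it assumes the amplitudes are available as a NumPy array):

import numpy as np

def int_scale_factor(amplitudes, int_max=32767):
    # Scale so that the largest |amplitude| maps near the integer range
    # limit, with a small safety margin to avoid clipping.
    peak = np.max(np.abs(amplitudes))
    return 0.95 * int_max / peak if peak > 0 else 1.0

data = np.array([-0.8, 0.3, 1.0])  # e.g. amplitudes after AGC
print(int_scale_factor(data))      # about 31128.65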
LOGIS
When you select the GPR LOGIS (OKO) format data input routine, the dialog box shown in the
following figure will appear. Press the Browse button to select a GPR file in LOGIS format. If you click
the “i” button, a window containing information about the selected file will open.
GSSI
When you choose the GSSI format of GPR data input, the window shown in the figure below will open.
Click the Browse button to select a GPR file in SIR GSSI format (*.dzt file extension). The name of the
selected file will be displayed in the File field. To confirm the data file selection, press OK.
Dataset Math
The module performs trace-by-trace arithmetic on two datasets and inputs the result into the flow. The
most typical usage is to subtract a processing result from the input wavefield to examine the difference.
Module parameters
Module parameters are shown below:
Use the buttons to select the First dataset and Second dataset. Select the arithmetic operation to be
performed: Subtract or Add. When subtracting, the samples of each trace of the second dataset will be
subtracted from those of the trace with the same sequential number in the first dataset; when adding,
they will be added. The result will be input to the flow.
Using the radio buttons to the right of the dataset names, you can select which set of headers shall be
associated with the resulting traces – taken either from the first or from the second dataset.
Dataset Merge
This module opens two datasets, merges them, and adds the result to the workflow. This procedure
enables easy processing of any data in the window mode.
Module parameters
Select the boundary for merging the two datasets (it should be written as a header in one of the
datasets).
Tapering width – overlapping width in ms, measured “upwards” and “downwards” from the boundary.
Weight coefficients will be applied to the traces falling within this window to ensure a smoother
transition between the datasets.
The Input headers toggle buttons to the right of the dataset fields can be used to select which header
set will be imported into the workflow together with the boundary pick – the one from the first dataset,
or the one from the second dataset.
Use the buttons to select the First dataset and the Second dataset, and specify the operation to be
performed on the datasets: Subtract or Add. If subtraction is selected, the values of traces in the second
dataset will be subtracted from the corresponding values of traces with the same number in the first
dataset; if addition is selected, the values will be added. The result will be loaded into the workflow.
Load Text Trace
The module loads a single trace from a text file into the flow. To do this, press the … (Browse) button
and select the text file containing the trace. The values in the file must be formatted as a single column
without headers.
Specify dt – the sampling interval in ms, and number of samples – the number of samples in the trace.
3D Time Slice Input
The module is designed to obtain time slices for the selected seismic cube. Slices obtained with the help
of the module can be visualized using Screen Display.
ATTENTION! For correct slice view in Screen Display module, one of axes (Inline or Xline) should
be chosen as a time axis.
Data requirements:
3D Slice Input work window. Panel 1 – specify the database dataset to be used to obtain slices.
Panel 2 – specify the time slices. Panel 3 – specify the display format as well as the trace header to
record the slice time.
Parameters:
Input:
Output:
• Time axis in output slices – the trace header (ILINE_NO or XLINE_NO) to be converted to the
time axis of an output slice. The other header is converted to the X axis of the output slice.
• Header field for time values – the header where the time value (in milliseconds) of all output slice
traces will be recorded. This value will be the same for the whole ensemble (1 slice).
Dataset Import
The module provides import functionality of RadExPro datasets (*.rdx) directly from a processing flow
with replica support.
Dataset Export
The module provides output functionality of RadExPro datasets (*.rdx) directly from a processing flow
with replica support.
RAMAC/GPR
Module parameters
When selecting the RAMAC/GPR format data input routine the following dialog box will appear.
Click the Browse button to select a file in RAMAC/GPR format (*.rad file extension). After that,
information about the file will appear in the dialog box fields.
In the Profile length group, the first field displays the number of traces (GPR shots) recorded in the file.
The second field shows the profile length in meters; “Incr. (dL)=” shows the profile increment, i.e. the
distance in meters between adjacent traces. If the profile increment is determined incorrectly, you can
change it by specifying a new value.
NOTE: the program reads the number of samples and sample rate from the first file in the list and uses
these values for the other files. Thus, the module expects files of the same format, with the same
number of samples and time step for import.
Compute spatial derivative – if this option is checked, the output traces will be derived from the
difference between two traces the distance between which is equal to the Gauge length.
EXAMPLE: if Gauge length = 2, the difference between trace No. 3 and trace No. 1 will be written
to output trace No. 1, and the difference between trace No. 4 and trace No. 2 will be written to output
trace No. 2.
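A minimal Python sketch of this differencing, with the gauge length expressed in traces (an illustration
under that assumption, not the module's actual code):

def spatial_derivative(traces, gauge):
    # Output trace i is the sample-wise difference between
    # trace i+gauge and trace i (0-based indices).
    out = []
    for i in range(len(traces) - gauge):
        out.append([a - b for a, b in zip(traces[i + gauge], traces[i])])
    return out

# With gauge=2: output trace 1 = trace 3 - trace 1,
# output trace 2 = trace 4 - trace 2 (1-based numbering as in the example)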
Fotech HDF5 input
The module aims to import HDF5 files in Fotech format.
NOTE: the program reads the number of samples and sample rate from the first file in the list and uses
these values for the other files. Thus, the module expects files of the same format, with the same
number of samples and time step for import.
Compute spatial derivative – if this option is checked, the output traces will be derived from the
difference between two traces the distance between which is equal to the Gauge length.
EXAMPLE: if Gauge length = 2, the difference between trace No. 3 and trace No. 1 will be written
to output trace No. 1, and the difference between trace No. 4 and trace No. 2 will be written to output
trace No. 2.
Silixa Input
Module for importing data from distributed acoustic sensing systems in binary format (TDSm).
Module parameters
Add – add a file to import.
Parameters
Number of clones – the number of clones for each trace.
PRODML HDF5 input
The module aims to import HDF5 files in PRODML format.
NOTE: the program reads the number of samples and sample rate from the first file in the list and uses
these values for the other files. Thus, the module expects files of the same format, with the same
number of samples and time step for import.
ATTENTION! To ensure proper module operation, a header containing the number of microseconds
from the beginning of the GPS epoch must be filled in. The header can be filled using the Remap option
in the SEG-D Input module.
Data requirements:
• The input traces should be organized in blocks, each of which corresponds to a specific
continuous recording time interval. In other words, all traces within a block should have the
same start time.
• Sequential time blocks should correspond to adjacent time intervals, allowing for the
generation of output seismograms that span the boundaries of time blocks.
• The time of successive blocks should increase monotonically.
The sole restriction on traces within a time block is that their start times must be identical. Once a trace
with a different start time is encountered, it will be considered as belonging to the next time block.
IMPORTANT: when running the flow in Frame Mode, the time block should not be divided between
frames. In other words, all traces within a time block should be included in a single frame. On the
other hand, there is no requirement for only one time block per frame; multiple time blocks can be
accommodated within a frame, as long as they are entirely contained within that frame.
Module parameters:
• Source events file path – select a file that stores information about the shot number and the
corresponding time in μs. At the moment, the module supports the following types of files:
o files with the txt extension
o files with the csv extension
• Event id column number – column with information about the shot number
• Time column number – column with information about the shot time, recorded in one of the
following formats:
o timestamps in GPS format in microseconds (integers, 16 characters)
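For example, a csv source events file with the event ID in column 1 and the GPS time in column 2 might
look like this (the values are hypothetical):
1,1395242180000000
2,1395242190000000
3,1395242200000000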
Trace Header Math
The Trace Header Math module is meant for performing mathematical operations on the values of
existing headers. The operations should be specified in the form of equations.
Module parameters
When the module is activated the following window appears:
General syntax
<header name1>=<formula1>
Color coding:
Blue – known headers.
Red – unknown headers and syntax errors.
Green – comments.
A formula may include numeric constants, header field values, functions, and mathematical operations:
Numeric constants:
1 5.6 3.81e5
Mathematical operations:
+ addition,
- subtraction,
* multiplication,
/ division,
^ exponentiation.
Functions:
sin(x) - sine of x;
cos(x) - cosine of x;
tg(x) - tangent of x;
ctg(x) - cotangent of x;
arcsin(x) - arcsine of x;
arcos(x) - arccosine of x;
arctg(x) - arctangent of x;
exp(x) - exponential of x;
sqr(x) - x squared;
sinc(x) = sin(x)/x;
sign(x) - sign of x: gives -1 for x < 0, 0 for x = 0, and 1 for x > 0;
When specifying a condition, you can use the following logical operations:
< - less than,
> - greater than,
= - equal to,
| - logical OR,
! - logical negation;
Operation examples
sfpind=1 (assigns the constant 1 to the sfpind header of every trace)
ffid = trunc((traceno-1)/24) + 1 (assigns the same FFID to each group of 24 consecutive traces: traces
1–24 get FFID 1, traces 25–48 get FFID 2, and so on)
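Another illustrative equation, built only from the operations listed above (it assumes the sou_x, sou_y,
rec_x and rec_y coordinate headers are filled; this is an example, not a built-in expression):
offset = ((rec_x-sou_x)^2+(rec_y-sou_y)^2)^0.5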
Buttons
The Check syntax button allows verifying the syntax of the expressions directly from the dialog box.
If there are no syntax errors, the following message will be displayed when you press this button:
If the syntax is incorrect, the error message will show the line number and position immediately after
the syntax error:
The Line and Pos values for the current cursor position are displayed below the editing field.
The Save template and Load template buttons allow saving the current equations to the project database
template and loading the equations from a previously saved template, respectively.
NOTE: The text buffer of the module can accommodate 200 characters. When more characters are
typed, the buffer can overflow and an error message will appear. In this case, reduce the number of
operations; to perform the remaining operations, add another copy of the module to the flow.
Header Averager
The module is designed for averaging values of the indicated header in traces that appear in the flow.
Averaging is performed in a sliding window of the given length. As a result of the module operation
the raw header values are rewritten.
Parameters
Trace header name – the header whose values are going to be averaged.
Window length, [traces] – the length of the sliding window (number of traces), in which the averaging
will be performed.
Honor ensemble boundaries – if this option is on, the averaging will be carried out within the ensemble
boundaries independently.
The processing mode is selected with the following options:
• Normal – the value calculated for each position of the running window is written to the central
point of the window;
• Subtraction – the calculated value is subtracted from the value at the central point of the window.
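A minimal Python sketch of the Normal mode (an illustration only; the edge handling here, truncating
the window at the ends of the line, is an assumption):

def average_header(values, window):
    # centered running mean over a sliding window of `window` traces
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(average_header([0, 10, 20, 30, 40], 3))  # [5.0, 10.0, 20.0, 30.0, 35.0]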
Shift header
The module is designed for shifting values of the indicated header (Source header) in traces that appear
in the flow by the indicated trace number (Shift (traces)). The shifted values are written to another
header field (Target Header). The module can be used for example, in order to account for the distance
between the GPS antenna and the seismic system while processing single channel offshore data.
Module parameters
Source header - select the header to be shifted by the specified number of traces.
Target header – select the header where the shifted header values will be written.
Shift:
• Constant – the header will be shifted by the user-specified number of traces entered in the
Shift amount field;
• From header – the header will be shifted by the value taken from the header specified in the
Shift amount field.
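A minimal Python sketch of the Constant mode (an illustration only; the sign convention of the shift is
an assumption):

def shift_header(source, shift):
    # Target header of trace i takes the Source header value of trace i - shift;
    # traces with no counterpart keep no value.
    out = [None] * len(source)
    for i in range(len(source)):
        j = i - shift
        if 0 <= j < len(source):
            out[i] = source[j]
    return out

print(shift_header([100, 200, 300, 400], 2))  # [None, None, 100, 200]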
Header↔Dataset Transfer
The module is designed for exchange of trace header information between the data passing through the
flow and a dataset in the project database.
In most cases, any changes made to the trace headers during flow execution are saved together with the
seismic data to a new dataset by the Trace Output routine. However, this way of saving information is
not always convenient. When the only aim of a flow is to fill in or modify several trace header fields,
without any processing of the seismic traces, it is most logical to re-write the headers of the original
input dataset rather than to create a new one with the modified headers. The Header↔Dataset Transfer
routine provides the necessary facilities for this operation.
The module can operate in 2 modes: (1) save header field values of the data in the flow to a dataset, or
(2) insert trace header values of a dataset into the header fields of the data in the flow.
Module parameters
Set up the mode of the operation in the Header transfer direction field: either FROM dataset TO
header (header of the data in the flow) or FROM header TO dataset.
Click the ... button to the right of the Dataset field to select a dataset to exchange trace header
information with. The name of the selected dataset will be displayed in the string to the left of the button.
In the Match by fields field, select names of the trace header fields that will be used to match the traces
from the flow to the dataset. You can type the header field names manually (separated by comma) or use
the H... button to select them from the list.
In the Assign fields field, select names of the trace header fields that will be assigned as a result of the
module operation. You can type the header field names manually (separated by comma) or use the H...
button to select them from the list.
Example:
Assume the module operates in the FROM header TO dataset mode, i.e. the values of the trace headers
from the flow will be written to the dataset. The Match by fields string contains the following:
FFID,CHAN
and the Assign fields string contains the following:
PICK1, PICK2
As a result of the module operation, for each trace in the flow the following operations will be made:
• In the dataset, all traces with FFID and CHAN equal to those of the current trace in the flow will
be found.
• For all these traces in the dataset, PICK1 and PICK2 header fields will be assigned by the values
of the PICK1 and PICK2 header fields of the current trace in the flow.
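A minimal Python sketch of this matching logic (an illustration only; flow and dataset traces are
represented as dictionaries of header values):

def transfer(flow, dataset, match=("FFID", "CHAN"), assign=("PICK1", "PICK2")):
    # FROM header TO dataset: for every trace in the flow, copy the
    # assign fields into all dataset traces with matching match fields.
    for src in flow:
        key = tuple(src[f] for f in match)
        for dst in dataset:
            if tuple(dst[f] for f in match) == key:
                for f in assign:
                    dst[f] = src[f]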
Surface-Consistent Calibration*
Module parameters
In the X axis field specify a header word indicating a source of the first distortion (e.g. source point).
In the Y axis field specify a header word, indicating a source of the second distortion (e.g. receiver
point).
In the Header field specify a header word containing values that are to be calibrated.
Click the ... button to the right of the Dataset field to select the dataset whose header values are to be calibrated.
Amplitude calibration – when selected this switch indicates that the calibrated values are to be
considered as amplitudes. In this case, the logarithm of the original values is calibrated.
Amplitude threshold – the values lower than the threshold are not used for calibration.
If it is necessary to make the calibration process iterative, use the Max number of iterations, Tolerance
and Accelerator parameters. Note that the duration of the iterative calibration process increases
proportionally to the square of the number of seismic traces. To accelerate this process, one could
slightly decrease the number of conditions under test within each iteration by increasing the
Accelerator parameter (1 = no acceleration).
The iteration stopping condition can be set through the Tolerance parameter or indicated directly as a
Max number of iterations.
Compute Line Length
The module is dedicated to calculation of the cumulative distance of each trace from the beginning of
the line. For the i-th trace in the flow, its distance L(i) is evaluated as:
L(i) = L(i-1) + dL(i-1, i), where dL(i-1, i) is the distance between trace i-1 and trace i.
Module parameters
You shall indicate the header fields in which the X and Y coordinates of each trace are stored, which
will be the input for the distance calculations, as well as the output header field where the calculated L
values will be stored.
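The calculation amounts to accumulating straight-line distances between consecutive traces; a minimal
Python sketch (an illustration only, with coordinates passed as plain lists):

import math

def line_length(x, y):
    # L(0) = 0; L(i) = L(i-1) + distance between traces i-1 and i
    L = [0.0]
    for i in range(1, len(x)):
        L.append(L[-1] + math.hypot(x[i] - x[i - 1], y[i] - y[i - 1]))
    return L

print(line_length([0, 3, 3], [0, 4, 10]))  # [0.0, 5.0, 11.0]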
Near-Surface Geometry Input
This module is used to assign geometry to field data obtained using the CMP, refraction seismic survey
and surface wave analysis methods.
Module operation
IMPORTANT! For the module to function correctly, the data must be input in the order of their
sequence along the profile, since the coordinates are calculated sequentially for each source position.
The following header fields are assigned to the source data set as a result of running the module:
The following procedure is recommended: load the data set using the Trace Input module → Assign
the geometry using the Geometry Input module → Output the current data set into a separate data set
with the assigned geometry using the Trace Output module.
The main dialog box of the module is divided into two tabs: assignment of geometry to data obtained
using the CMP and surface wave analysis methods (Reflection/MASW) and assignment of geometry to
data obtained using the refraction seismic survey method (Refraction). Each array type has its
corresponding interactive image that visually displays the current parameter of the tab being edited (for
example, receiver position, source position etc.).
Module parameters
The module interface is shown below:
Reflection/MASW
This tab allows assigning geometry to data obtained using the CMP and surface wave analysis methods.
Fixed mode
This mode is used when the receiver array is fixed along the profile and the source positions are moved
along the array. In this case the following initial parameters need to be entered in order to calculate the
geometry (all distances are specified in meters).
Parameters:
• First source position – coordinate of the first source position.
• Source step – source position step.
• Number of channels – number of receiver array channels.
• First receiver position – coordinate of the first channel.
• Receiver step – channel step.
• Bin size – bin size. Data obtained using the CMP method need to have the CMP number
specified in the CDP header field for further processing. Enter the preferred bin size in this field
to calculate the corresponding header.
• Number of shots at one point – number of shots per profile point. If several observations
were made at a single profile point, specify their number in this field. This value may reflect both
the number of accumulations at the source position (if they were not stacked in the field) and
different shot types (left/right). All shots at the same source position will be assigned the same
source position coordinate.
• Reassign FFID and CHAN headers – if the header fields corresponding to the source position
(FFID) and channel (CHAN) were left empty or were specified incorrectly, this option allows
recalculating them based on the entered array parameters. Make sure that the data are input
sequentially.
Variable mode
This mode is used to calculate the geometry of data obtained using an end-on array (the receiver array
is moved together with the source along the profile at a certain step).
Parameters
Reassign FFID and CHAN headers – if the header fields corresponding to the source position (FFID)
and channel (CHAN) were left empty or were specified incorrectly, this option allows recalculating
them based on the entered array parameters. Make sure that the data are input sequentially.
Refraction mode
Refraction – this tab allows assigning geometry to data obtained using the refraction seismic survey
method.
There are three main areas of the tab: 1) receiver line parameters, 2) offset source position coordinates,
3) streamer source position coordinates. The source position coordinates may be specified either with
a fixed or with a variable step.
• Const Step
▪ First source position – coordinate of the first source.
▪ Source step – source position step.
• Variable Step. If the source position step was not fixed, select this option. A table will open,
allowing you to specify the source position coordinates on the streamer in the source position
number – coordinate format. The number of lines in the table is the same as the value entered
in the Number of sources field. Enter the coordinates of all source positions on the streamer
(sequentially along the profile) into the table.
Offset sources – coordinates of the source positions at offsets specified separately for the “forward” and
“reverse” source positions. As in the previous case, the source positions can have either fixed or variable
step.
• Const Step. To specify the coordinates with a fixed step, select the Const Step option and enter
the following parameters:
▪ Number of forward sources – number of “forward” source positions.
▪ Number of reverse sources – number of “reverse” source positions.
▪ Forward step – “forward” source position step along the profile, starting with the first
channel. For example: if the first channel coordinate is 0 and the number of source
positions is 2 with the step of 5 m, the source position coordinates will have the values of
-5 and -10 meters.
▪ Reverse step – “reverse” source position step along the profile, starting with the last
channel.
• Variable Step. To specify the coordinates of variable-step source positions, select the Variable
Step option and enter the source position coordinates into the tables. The number of lines
in the tables is the same as the values entered in the Number of forward sources and
Number of reverse sources fields.
NOTE! Images displaying the various types of arrays are schematic and static. For example, the
number of hammers representing the source positions does not change when the corresponding
parameter (Number of sources) is changed on any of the tabs. Only a tip – a highlight or an arrow
pointing to the parameter – is displayed when any of the parameters is changed.
Crooked line 2D binning*
The module is designed for binning CDP 2D profiles with arbitrary geometry. The module is stand-
alone, i.e. it should be alone in the flow and requires no input-output modules.
IMPORTANT! To ensure correct work of the module, the dataset for which binning of every trace is to
be performed must have the following headers correctly filled: SOU_X and SOU_Y – source
coordinates, REC_X and REC_Y – receiver coordinates, SOURCE – source point ordinal number. The
input dataset must be sorted by SOURCE.
As a result of the module work, the following headers will be filled in the binned dataset: CDP – CDP
number (if the point did not fall into any of the bins, the ordinal bin number equals -1), CDP_X, CDP_Y
– CDP coordinates (bin center), OFFSET – calculated source-receiver distance (always positive),
TR_FOLD – fold of the given CDP.
When adding a module into the module flow, a pop-up window of parameters is displayed:
Parameters
Select Scheme – in this field, the so-called binning “scheme” is configured. The scheme is an object of
the database and may be saved at any of its levels. The use of the scheme makes it possible to save
current status of the binning (profiles, binning parameters, other settings). To create a new scheme or to
select the existing one, press Browse button to the right of the field.
Calculate azimuths – when selecting the given option, a source direction for each trace will be recorded
into the header selected from the list.
Working with module
Press the Add profile button and select the dataset (profile) to be binned out of the project database.
After the profile has been selected, its name will be displayed in the tree on the left. On the right, in the
binning run screen, the midpoints of all traces (positions of the RP-SP segment midpoint) will be
displayed. One binning project (scheme) can contain several profiles.
In order to make a profile active, select it out of the list of available profiles. The midpoints of active
profiles will be highlighted in green.
You can also display the positions of sources by pressing the button in the toolbar (red spots) and
the positions of receivers by pressing the button in the toolbar (blue spots).
To start binning, click the right mouse button on the profile in the list and select the Add binning line
option.
In the pop-up window, indicate the bin size (Bin size) as well as the swath width perpendicular to the
binning line (Swath range).
The midpoints found in the binning line will be distributed among the bins – in the CDP header of the
matching traces the CDP number (ordinal bin number) will be recorded. If a midpoint did not hit the
swath, number -1 will be recorded in the CDP header of the matching trace. Further on, such traces may
be removed from processing by setting up the relevant selection range in the Trace Input module.
After the parameters have been specified, press OK, and the program will build a binning auto-line
according to the indicated parameters. The binning line will appear on the screen and its name will be
displayed in a panel on the left in the profile tree.
Every profile can have several alternative binning lines; however, at the end of the work only one line
may be applied to a profile.
Once built, a binning line can be edited manually (see Binning line manual editing). Binning auto-line
settings may be edited by means of the Parameters→Auto-line parameters option (see Binning auto-
line settings).
Click the right mouse button on the name of the binning line in the profile tree to open a pop-up menu.
Select the Show bins option to display in the module desktop the bins partitioning lengthwise:
After the binning line has been set completely, click the right mouse button to select from the list the
binning line that you would like to apply to the dataset, and press Apply binning to dataset.
As a result, the CDP numbers as well as the other calculated values will be recorded in the dataset
headers.
It is possible to move, delete or add nodes, thus changing the line shape (for example, to build a more
accurate line). For line editing, use the following mouse commands:
While binning, use the following options in the toolbar for navigation and zooming:
– align the scale correlation by matching the vertical scale to the horizontal one
– align the scale correlation by matching the horizontal scale to the vertical one
To move the zoomed image, use the scrolling bars or the mouse – drag and drop the image to a new
position with the left mouse button.
Binning auto-line settings may be edited by means of the Parameters→Auto-line parameters option.
Since the midpoint coordinates generally do not fall on a single line, it is possible to specify the rules by
which the automatic binning line computation approximates the cloud of midpoints.
The algorithm of the automatic line computation moves along the RP line (in ascending order of the
SOURCE field values) with a window of a specified size and a specified step. For each position of the
window, a binning line node is computed as the average of the midpoints found in this window. It is
possible to use all the midpoints found in the window or to reject a certain percentage of far offsets, so
that the resulting line will be drawn towards near offsets. Then, by default, the obtained nodes are
thinned: the nodes at which the line direction does not change significantly are rejected.
Window size – the size of the midpoints averaging window (in the number of bins).
Overlapping – the width of overlap between successive positions of the averaging window (in the
number of bins).
Rejection parameters – this group of parameters makes it possible to set node thinning and far offset
rejection.
The node rejection criterion is the proximity of the node line to a straight line. For each node, except for
the extreme ones, the algorithm computes the angle between the direction from the previous node to the
current one and the direction from the current node to the following one. If this angle is less than the
threshold value specified in the Angle field, in degrees (i.e. the line direction does not change
dramatically at this node), the node is rejected.
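A minimal Python sketch of this thinning rule (an illustration only; nodes are (x, y) pairs, the threshold
is in degrees):

import math

def thin_nodes(nodes, angle_threshold):
    # Keep the end nodes; drop an interior node when the change of
    # direction at it is below the threshold (line is locally straight).
    kept = [nodes[0]]
    for i in range(1, len(nodes) - 1):
        a1 = math.atan2(nodes[i][1] - nodes[i-1][1], nodes[i][0] - nodes[i-1][0])
        a2 = math.atan2(nodes[i+1][1] - nodes[i][1], nodes[i+1][0] - nodes[i][0])
        turn = abs((math.degrees(a2 - a1) + 180.0) % 360.0 - 180.0)
        if turn >= angle_threshold:
            kept.append(nodes[i])
    kept.append(nodes[-1])
    return kept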
Far offset rejection – if this option is activated, a certain percentage of the farthest offsets is rejected
when the midpoints are averaged. Thus, the resulting binning line will be drawn towards near offsets.
The rejection percentage is specified in the Percent to reject field (a value of 0 is identical to the option
being deselected; with a value of 100, the binning line will be traced along the RP line).
Press the button in the toolbar, and the midpoints of the active profile will be coloured depending on
the source-receiver distance, according to the selected colour palette. The palette is configured via the
Parameters→Offset palette menu.
The module makes it possible to show in the status bar additional information for the corresponding
trace when moving the pointer over a midpoint. To select the information to be displayed, use the
Parameters→Headers menu option. A pop-up window will appear:
Checking the necessary options makes it possible to show the values of the FFID and SOURCE header
fields from the active dataset, as well as the offset values computed in the module from the RP and SP
coordinates (OFFSET) and the CDP number (CDP), i.e. the number of the bin hit by the trace in
compliance with the active binning line.
ATTENTION! The OFFSET and CDP values displayed in the status bar are computed in the module and
will be written into the dataset headers only after the current binning has been applied thereto (i.e. after
the Apply binning to dataset command has been performed).
Select the File→Load background image menu option to load a background image (e.g. a location map)
onto the program desktop. In the pop-up window, press Browse to select a picture file in one of the
available graphic formats. Then indicate the coordinates of the picture corners:
To remove a previously loaded image from the desktop, use the File→Delete background image menu
option.
Header Output
This module allows exporting the headers to a text file.
The module supports the Batch mode and the use of the replica system.
Parameters
Export fields – headers to be exported to the file:
The module can simultaneously load data from three file types – SPS X (XPS), SPS S (SPS), SPS R
(RPS).
Module parameters
• Browse – select the file to load. You can also select it in the import window that opens when
you press the Layout button.
• Layout – opens the Import SPS X file dialog box where you configure the geometry file parsing
mode. For detailed information about this dialog box, see Importing SPS X files.
A detailed description of the SPS file loading algorithm is provided in the manual “Adding Geometry
to 3D Data and CDP Binning” available on our website RadExPro.ru under Download/Manuals.
Import UKOOA P1-90
This module is used to load geometry from a P1-90 file into the workflow.
You can select the file to load in the dialog box or directly in the geometry import window (Import
UKOOA P1-90 file). For detailed information about the Import UKOOA P1-90 file window, see
Importing UKOOA P1-90 files.
Parameters
Browse… – select a geometry file in the P1-90 format.
Layout – opens the geometry import window (Import UKOOA P1-90 file).
3D CDP Binning
This module is used for in-flow binning of 3D data by common depth points. The grid can be defined
in two ways – manually or from the database.
Instructions for using the interactive version of the 3D CDP Binning module are provided below.
Module parameters
Re-assign offsets – overwrite the current offset values with the post-binning ones.
• The Save grid and Load grid buttons allow saving the grid parameters as a database object and
loading a bin grid previously saved to the database, respectively.
Load from database – load the grid from the database.
Pressing the Apply or Cancel button exits the binning mode. When the Apply button is pressed, the
selected binning grid parameters are saved to the internal geometry table. When the Cancel button is
pressed, the latest changes are cancelled, and the table is not overwritten.
Header Enumerator
The Header Enumerator module is designed for automatic filling in of values of the specified header
according to the specified parameters.
Parameters
In the drop-down list, select a Header which will be automatically filled in with the corresponding
values. Specify the initial value in the Start Value field and the increment in the Step field.
Numbering of the header values is implemented according to the rules defined at the bottom of the
dialog box. Depending on the selected option, a certain part of the dialog box elements will be inactive.
The module operation options are as follows:
• Ensemble: numbering of the selected header values is implemented within ensembles. After the
last trace in an ensemble is reached, the numbering of the next ensemble will start again with the
value of the Start Value field and the step of the Step field;
• Continuous numbering: numbering of the selected header values is implemented continuously.
In this case, two additional options will be available:
▪ Sequentially: header values are numbered sequentially, starting with the value of the
Start Value field and with the step of the Step field;
▪ Reset start value at header changes: the first trace in the stream is assigned the Start
Value. Then, header values are sequentially numbered at the step of the Step value until
the value of another "reference" header has changed. This reference header must be
selected in the dropdown list below the Reset start value at header changes field.
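As an illustration, the Reset start value at header changes mode behaves like this Python sketch (the
header values are hypothetical):

def enumerate_header(reference, start, step):
    # restart numbering whenever the reference header value changes
    out, current, previous = [], start, None
    for value in reference:
        if previous is not None and value != previous:
            current = start
        out.append(current)
        current += step
        previous = value
    return out

print(enumerate_header([10, 10, 10, 11, 11], 1, 1))  # [1, 2, 3, 1, 2]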
NOTE: Using the Framed mode does not affect the numbering of headers. After the next set of traces
has been loaded, numbering will continue in compliance with the selected module parameters.
Profiles Crossing
This module allows searching for crossings of profiles by the trace coordinates stored in the CDP_X
and CDP_Y headers.
ATTENTION! Before profile loading to the module, you should ensure that they meet the following
conditions:
1) CDP_X and CDP_Y headers are assigned for each trace of the profiles being loaded.
2) A unique ID number is determined for each profile and is assigned to one of the headers of
each trace. The header name should be the same for all profiles.
Example 1: you need to load 2 profiles.
ID=1 corresponds to Profile No. 1, ID=2 corresponds to Profile No. 2. To do this, create a
separate "Profile_ID" header, then assign value "1" for each trace of Profile 1, and value "2"
for each trace of Profile 2.
3) Traces in each profile are sorted naturally, i.e. they form a seismic cross-section when
visualized in Screen Display.
Parameters
• Profile ID header – select the header to which the profile ID is assigned. Its value must be
unique for each profile and must not be equal to the Impossible profile ID described below.
• Crossing ID header – specify the header to which the crossing profile ID will be assigned. It
will be set for several traces at the cross-point; at other points the header value is equal to
Impossible profile ID.
• Impossible profile ID – the value to be assigned to the Crossing ID header (see the previous
clause) outside cross-points.
ATTENTION! The Crossing ID header type shall be equal to the Profile ID header data type. We
recommend using the Real type.
Transversal profile with a display of main grid of profiles. Cross points are displayed with vertical lines, the
caption on top is the profile ID from the corresponding header.
2D Flex Binning
The module is designed to infill empty offset bins with traces. This need arises when the seismic data
is non-uniformly distributed over the area.
1) The input data represent a set of trace ensembles. The ensembles in the set should be arranged
in the same order as on the profile – for example, CDP or FFID ensembles going consecutively.
2) Within each ensemble, the traces must be sorted by the source-receiver distance (OFFSET
header).
Examples of possible sorting: CDP-OFFSET or FFID-OFFSET.
1) The module splits the input trace ensembles into offset bins, assigning numbers to them
depending on the offset.
2) The number of traces is calculated for each ensemble in each offset bin.
3) If there are no traces in an offset bin, the module checks the adjacent ensembles for the
presence of traces in offset bins with the same number as the empty one. If there are any, the
module copies the traces into the empty offset bin.
The figure shows 3 ensembles corresponding to three consecutive CDP points. Let us say, the ensemble
corresponding to CDP #2 is central. The user sets the size of the Flex bin, which shows how many ensembles
around the central one will be analyzed in order to find the trace for empty offset bin. In the figure, the size of
Flex bin is equal to three ensembles. Then each ensemble is divided into offset bins, each of which is given its
own sequence number. If the trace is lacking in the central ensemble in one of the offset bins, the algorithm
checks the presence of traces in adjacent ensembles (1 CDP and 2 CDP in the figure) in offset bins with the
same sequence number, and in case of their detection copies them to the central one in the empty offset bin.
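A minimal Python sketch of the offset-bin indexing that underlies these steps (the rounding convention
and parameter names are assumptions for illustration):

def offset_bin_index(offset, nearest_center, bin_size):
    # Bin centers lie at nearest_center + k * bin_size; a trace falls
    # into the bin whose center is closest to its offset.
    return round((offset - nearest_center) / bin_size)

# e.g. with nearest_center=100 and bin_size=50, an offset of 180 m
# falls into bin index 2 (bin center at 200 m)
print(offset_bin_index(180, 100, 50))  # 2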
Parameters
Offset bin grid:
• Distance to center of nearest bin. The parameter sets the position of the binning grid inside the
ensembles. The binning grid extends to the right and to the left from this value, taking into
account the Bin size.
• Bin size. The parameter sets the offset bin size.
• Maximum number of traces per offset bin at each CDP. The parameter sets the maximum
possible number of traces within the offset bin. If the number of traces inside the bin exceeds
the specified number, the module will remove the excess traces, leaving the ones that are
located closer to the center of the offset bin.
Central offset bin image. The length of the green arrow is set by the parameter Distance to center of nearest bin,
and the size of the orange interval is set by the Bin size parameter.
Flex bin grid. The parameter sets the number of ensembles that are used when filling the empty offset
bins in the central ensemble. Thus, the ensemble "2 CDP" in the figure at the beginning of the
description is central. If there are empty offset bins in this ensemble, they will be filled using traces
from "1 CDP" and "3 CDP". The size of the flex bin in this case is equal to 3 ensembles.
• Fixed value – for all offset values (offset bins), the number of ensembles considered on each side
of the central one is the same and is specified by the user.
• Variable value – the user sets the size of the flex bin for each offset value (offset bin).
Filling format:
Offset1, Offset2, Offset3 – offset values; N1, N2, N3 – the number of ensembles on each side of the
central one in the flex bin. Between the given values, N is interpolated.
Write trace header fields. These parameters define the headers that will be filled for each trace at the
module output:
• Offset bin index – the number of the offset bin into which the trace falls is written to the
header.
• Offset bin center – the coordinate of the center of the offset bin to which the trace belongs is
written to the header.
• Trace type – the type of the trace is written to the header: copied (1) or original (0).
Trace to Header
The module is designed to transfer the amplitude values of each sample of the input dataset traces into
the headers of a new dataset. This is a utility operation that allows traces to be drawn in the form of a
graph in the Seismic Display module.
Output data: a dataset whose headers are assigned the amplitude values of each sample of the input
dataset traces.
1) The user sets the output dataset headers to which the amplitude values of each sample of the
input dataset traces will be assigned. Suppose that these headers were created by the user and
are called "TRACE_1", "TRACE_2", "TRACE_3" ... "TRACE_N". One header corresponds to
one input trace.
2) The dataset is created by the algorithm. The number of traces in the new dataset is equal to the
number of samples in the input data traces, and the trace length is 1 sample.
3) Then the module operates as follows: the amplitude values of the samples of the first trace are
assigned to the header of the new dataset specified by the user (say, "TRACE_1"), those of the
samples of the second trace – to the header "TRACE_2", etc.
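A minimal Python sketch of this transposition (an illustration only; the TIME header name follows the
Field for time values parameter described below):

def trace_to_header(traces, dt):
    # One output "trace row" per input sample; headers TRACE_1..TRACE_N
    # receive the amplitudes, TIME receives the sample time.
    rows = []
    for s in range(len(traces[0])):
        row = {"TIME": s * dt}
        for n, trace in enumerate(traces, start=1):
            row[f"TRACE_{n}"] = trace[s]
        rows.append(row)
    return rows

print(trace_to_header([[0.1, 0.2], [0.3, 0.4]], 2.0))
# [{'TIME': 0.0, 'TRACE_1': 0.1, 'TRACE_2': 0.3},
#  {'TIME': 2.0, 'TRACE_1': 0.2, 'TRACE_2': 0.4}]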
Trace to Header module operation principle. The input trace with a length of N samples is shown to the right.
The amplitude values of each trace samples (in c.u.) are assigned to the output dataset header table to the header
"TRACE_1"
Module operation window. To add the headers to which the values of the traces will be assigned in the output
dataset, click on the "Add" button, and to remove the header click on the "Remove" button.
Parameters
• Add… – allows adding headers for assigning sample values.
• Remove – allows removing headers from the list.
• Field for time values – allows setting a header which will contain the time corresponding to a
specific sample.
Horizon Manipulation
The module can convert horizons from picks to headers and the other way around, combine horizons,
and interpolate and extrapolate them (within ensembles and in space). It can accept both pick objects
and dataset headers at the input. The results of the module operation can be saved as pick objects or
recorded in the specified header of the dataset.
IMPORTANT! The module perceives the value "9999" in a trace header as a null value. This applies
to both writing and reading the headers.
Parameters
Source – select a type of input data:
• Picks – the input horizons are read from pick objects;
• Headers – the input horizons are read from the headers of the dataset.
Click Add pick… to add pick objects to the list. When you click on the button, a dialog box appears for
selecting a pick from the project database.
If you add two identical items to the list, a warning dialog box appears:
• Single pick – the resulting horizon will be saved into a single pick object, as indicated.
Select a database Path to the output pick from the drop-down list. In the field to the right, you can either
select the output pick name or enter it manually. If the output pick does not exist, the module will create
it; otherwise the pick will be overwritten.
If the input horizons are read from picks, the headers for the output pick are implicitly specified by the
headers of the input picks. The headers of all input picks must coincide.
If the input horizons are read from the headers, the header names for the output pick should be explicitly
specified.
If the user indicated an output pick in the list of input picks, then after clicking the OK button a warning
dialog will appear.
• Single header - the resulting horizon will be recorded into a specified dataset header.
The output header is selected from a drop-down list. By default, the PICK2 header is selected. If the
output header is indicated in the list of input headers, a warning will also appear after clicking the OK
button. The output header should not be used by input picks. The dialog box will NOT issue any
warnings, but an error will be generated when the module is started.
IMPORTANT! If a trace contains a point of more than one input horizon, the value of the combined
output horizon on this trace will be equal to the arithmetic average of the values of the combined
horizons.
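A minimal sketch of this combination rule, treating "9999" as the null value (the horizon values are
hypothetical; presumably, where only one horizon has a point, that value is used as is):

    import numpy as np

    NULL = 9999.0
    # Two input horizons along the same traces; 9999 marks missing points.
    h1 = np.array([120.0, 124.0, NULL, 130.0])
    h2 = np.array([NULL, 126.0, 128.0, 132.0])

    masked = np.ma.masked_values(np.vstack([h1, h2]), NULL)
    # Where both horizons have a point, the output is their arithmetic
    # average; an all-null trace stays null.
    combined = masked.mean(axis=0).filled(NULL)
    print(combined)  # -> [120. 125. 128. 131.]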
• Multiple picks - each resulting horizon will be saved into a separate pick object.
The output picks are displayed in the right part of the dialog box. A pick is implicitly added to the table
when adding an input pick or an input header. The output pick table is synchronized with the list of input
horizons (selection and scrolling).
By default, the output picks are created in the same place as the input ones. If the headers are used as
input horizons, the default path will be equal to the first path that is known to the dialog: the path to the
previous pick in the table, or the path to the single output pick, or the path to the first line (or an area, if
there are no lines).
If picks are used as an input, the default names for output picks are formed by adding "_2" to the names
of the input picks. If headers are used as an input, the default names for output picks are formed by
transforming the names of the input headers to lowercase.
If trace headers are used as an input, the list of output picks would contain 2 additional columns: First
field and Second field – here you can specify output pick headers for each output pick individually.
If picks are used as an input, the pick headers for the output picks are taken from the corresponding input
pick, and the First field and the Second field columns are missing in the table.
• Multiple headers - each resulting horizon will be recorded in a separate dataset header.
A header is implicitly added to the list when adding an input pick or an input header. Basically, the list
of output headers is synchronized with the list of input horizons (selection and scrolling). By default, the
output header title is PICK2.
Each element of the Multiple headers / Multiple picks tables can be edited manually by double-clicking
the appropriate item with the left mouse button. If data inconsistency occurs or identical values appear
as a result of editing the table elements, an error warning dialog box will appear. If output picks are
selected among the input picks, an error will be generated when the module is started. All picks that did
not exist before the module was launched will be created at run time.
Reference dataset is the dataset from which the input headers are read and/or to which the output headers
are written. Besides, this dataset is used as a reference for interpolation/extrapolation even if both the
inputs and the outputs of the module are picks only.
If you are working with pre-stack data, check the Pre-stack check box and specify the following
parameters:
• Trace coordinate is the header that determines the relative coordinate of a trace within an ensemble
(for example, OFFSET for a CDP seismogram);
• Ensemble X coordinate is the header containing the X coordinates of each ensemble;
• Ensemble Y coordinate is the header containing the Y coordinates of each ensemble;
• Ensemble forming headers is an ordered set of headers that defines how the traces are split into
ensembles. The dataset does not need to be sorted by these headers. In case there is only one
ensemble in the dataset, the list may be left empty. However, the list should not contain repeats
or nonexistent headers, and should not be longer than 5 (otherwise a warning dialog box will
appear when clicking the OK button).
Post-stack – if this check box is selected, the specified dataset is considered to be stacked. The following
parameters should be specified to work with such a dataset:
Number of threads is the number of parallel threads to run during the operation of the module. It is
used to speed up the operation of the module on multi-core CPUs. The maximum number of threads
should not exceed the number of cores available – click Set maximum to set it automatically.
2D Header Smoothing
The module averages header values over an area. A weighted average is computed inside a
window whose dimensions and rotation angle are specified by the user (a sketch of the trimmed
averaging is given after the parameter list below).
Parameters
Dataset path – the dataset for which the headers will be averaged.
Azimuth tangent box size – specify the longitudinal size of the area (same direction with azimuth)
within which the header will be averaged.
Azimuth normal box size – specify the transverse size of the area (perpendicular to the azimuth) within
which the header will be averaged.
Alpha trimmed – the percentage of points with the highest and lowest values that will not be taken into
account when calculating the average value.
Drop at least edge values – if the number of points in the cell is insufficient to reject the specified
percentage, the algorithm will still discard the largest and smallest values.
Replace outlying case with averaged value – replace the header value if it exceeds the threshold.
Outlying case threshold – specify the threshold above which the header value will be replaced by the
average value in the cell.
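For illustration, a minimal sketch of the alpha-trimmed averaging inside one cell (the interpretation of
the percentage – trimming from each end – and the input values are assumptions; the window extraction
and weighting are omitted):

    import numpy as np

    def alpha_trimmed_mean(values, alpha_percent):
        """Discard alpha_percent of the points from each end of the sorted
        values and average the rest; if the trim count rounds to zero, drop
        at least the single largest and smallest values (cf. the 'Drop at
        least edge values' option)."""
        v = np.sort(np.asarray(values, dtype=float))
        k = max(int(len(v) * alpha_percent / 100.0), 1)
        return v[k:-k].mean() if len(v) > 2 * k else v.mean()

    print(alpha_trimmed_mean([3.0, 100.0, 5.0, 4.0, 6.0, -90.0], 10))  # -> 4.5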
OGP P1/11 Import
The module imports navigation data in the OGP P1/11 format.
Parameters
Layout – opens the dialog box where you configure the geometry file parsing mode.
Source – receiver repositioning
The module recalculates the source/receiver coordinates based on first arrivals. The algorithm
solves a linear system of equations for each bottom station, which results in the determination of the most
probable position of each station (an illustrative sketch of the principle is given at the end of this section).
The module is of the stand-alone type and must be the only module in the processing flow.
Input:
A dataset with populated X, Y, Z coordinate headers for receivers and sources, and a header with the
first-arrival time for each trace.
Output:
The module writes the bottom station coordinates calculated from the first breaks into the
trace headers of the input dataset.
Parameters
Dataset – specify the dataset with the data for which the new receiver coordinates will be calculated.
Minimum offset [m] and Maximum offset [m] – the range of offsets that will be considered when
constructing the system of linear equations.
Refracted waves – the system will be solved assuming that waves refracted at the water-bottom
boundary appear in the first breaks.
Direct waves – the system will be solved assuming that direct waves appear in the first breaks.
Refracted and direct waves – the system will be solved assuming that either direct or refracted waves
appear in the first breaks, depending on the offset.
Take into account shelf gradient – take the slope of the bottom surface into account when calculating
the coordinates.
Input/Output headers
First break pick – set the header that stores the first-break pick time for each trace.
Source X, Source Y, Source Elev – set the X coordinate of the source, the Y coordinate of the source,
and the source depth.
Receiver X, Receiver Y, Receiver Elev – set the X coordinate of the receiver, the Y coordinate of the
receiver and the depth of the receiver.
New Position X, New Position Y – set the headers in which the new receiver coordinates will be written.
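The exact system of equations is internal to the module, but the principle can be illustrated for the
direct-wave case: each first-break time satisfies t ≈ sqrt((x − SOU_X)² + (y − SOU_Y)² + Δz²) / v, and
the station position (x, y) is found by least squares over all traces of the station. A hypothetical sketch
(the shot geometry, picked times and water velocity are invented for illustration):

    import numpy as np
    from scipy.optimize import least_squares

    V_WATER = 1500.0  # assumed water velocity, m/s

    def residuals(pos, sou_x, sou_y, dz, t_fb):
        x, y = pos
        dist = np.sqrt((x - sou_x) ** 2 + (y - sou_y) ** 2 + dz ** 2)
        return dist / V_WATER - t_fb  # direct-wave traveltime misfit

    # Hypothetical shots around one bottom station and the picked first breaks.
    sou_x = np.array([0.0, 400.0, 400.0, 0.0])
    sou_y = np.array([0.0, 0.0, 400.0, 400.0])
    dz = np.full(4, 50.0)  # source-to-receiver depth difference, m
    t_fb = np.array([0.1972, 0.2380, 0.1972, 0.1453])  # picked times, s

    sol = least_squares(residuals, x0=[200.0, 200.0],
                        args=(sou_x, sou_y, dz, t_fb))
    print(sol.x)  # most probable New Position X, New Position Y (~150, 250)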
Interactive Grid Setup*
The Interactive grid setup* module is designed for interactive work with the 3D data binning grid. It
is possible to create, edit, view, and load a previously created grid.
The module belongs to the stand-alone group, i.e. it must be the only module in the flow and does not
require input and output modules.
WARNING! For the module to run correctly, the following headers must be filled out correctly:
REC_X, REC_Y, SOU_X, SOU_Y.
When you run the module for the first time, the default grid is displayed in the right-handed coordinate
system, where the inlines are parallel to the Y-axis (inline numbers change along the X-axis) and the
crosslines are parallel to the X-axis (crossline numbers change along the Y-axis) – this configuration
corresponds to the grid parameters of previous versions of the program (Figure 1).
• Right-handed (default) – the crosslines are rotated clockwise by 90 degrees relative to the inlines
• Left-handed – the crosslines are rotated counterclockwise by 90 degrees relative to the inlines
(Figure 2).
Grid rotation angle (Angle) is the angle of deviation from the vertical axis of the direction that is
parallel to the Y-axis when the angle is zero.
Figure 1. An example of a binning grid in the right-handed coordinate system, assuming that the Inline is
parallel to the Y-axis (Inline direction at Angle = 0 – Parallel to Y)
Figure 2. An example of a binning grid in the left-handed coordinate system, assuming that the Inline is parallel
to the Y-axis (Inline direction at Angle = 0 – Parallel to Y)
Let us compare the parameters of the module with those shown in the picture:
Module parameters
The Scheme path field specifies the binning scheme. The scheme is an object of the RadExPro project
database and can be saved at any of its levels. Using the scheme makes it possible to save the current
state of the binning (binning parameters, grids and others). You can create a new scheme or choose from
the existing ones.
The module window is divided into two areas: on the left – the area for setting the binning grid
parameters, on the right – the map in binning mode. By default, a binning grid of 100 Inlines by 100
Crosslines is displayed.
• First inline # – the Inline number from which the numbering in the grid starts;
• Number of inlines – the number of inlines;
• First xline # – the number of the crossline from which the numbering in the grid starts;
• Number of xlines – the number of crosslines;
• Angle – angle of clockwise rotation of the inline (Inline direction at Angle = 0 – Parallel to Y)
in degrees relative to the Y-axis.
The Save grid and Load grid buttons allow you to save the grid parameters as an object in the database
and to load a previously saved bin grid from the database.
The module supports reconstruction of the binning grid based on the positions of three reference
points – Make grid from reference points.
Binning mode
The main window displays the midpoints of the dataset that was loaded on the Input datasets tab, as
well as the binning grid (the positions of the source points and receiver points can be added through the
Settings – Source points/Receiver points toolbar). At the top there is a toolbar that makes it possible
to change the image scale, measure linear distances, and edit the binning grid manually using
the mouse.
Tool bar
The toolbar at the top of the screen contains the following buttons (from left to right):
Settings
Scaling the map to fit the window size (Fit data to screen)
Setting the same zoom on both axes of the map (Set axis ratio 1:1)
Ruler for measuring distance on a map and determining azimuth (Measure distance and azimuth)
Enable/disable binning grid editing in interactive mode using the mouse. The following operations
are possible in this mode:
In addition, right-clicking in the map area opens a context menu with the following commands:
• Export to text file – exporting the values of all the headers associated with a given map to a text
file (for example, when exporting to a cdps text file, the headers CDP_X, CDP_Y, XLINE_NO,
ILINE_NO, CDP will be saved)
Changing the image scale
The map is scaled with the scaling frame. Click the Zoom button on the toolbar and use the left
mouse button to select the rectangular area of the map that you want to enlarge to the size of the entire
window. You can zoom the map out using the Unzoom button. In addition, you can zoom the map
elements in and out with the mouse wheel when the cursor is in the map area. The "Fit data to screen"
button scales the map to the window size.
Parameters
Press the Settings button to adjust the image settings. In addition, the parameters dialog box can be
opened by double-clicking on the axes – in this case it will open immediately at the corresponding
section.
The window parameters are grouped by sections. The window parameters dialog is divided into two
parts – at the top it shows a list of sections, at the bottom it shows the parameters of the selected section.
The checkboxes to the left of the section names control the visibility of the corresponding map elements
or option activity.
Source points, Receiver points, CMP points, Reference points, Left scale, Bottom scale,
Right scale, Top scale sections
The parameters of these sections are similar to those presented in the Interactive QC module (see
Interactive QC* for details)
Grid section
All the changes made to the binning grid parameters in interactive mode are also displayed in the fields
of the Grid Settings dialog box.
1. Use Geometry Spreadsheet to open a table with the dataset header information. Let us choose
three arbitrary points corresponding to the extreme values of the grid:
CDP_X CDP_Y ILINE_NO XLINE_NO
1. 510318.5 7379729.9 1 15
2. 511250.9 7379883.9 235 836
3. 511501.3 7379298.5 4 1274
2. Let us load the dataset, in which we want to restore the grid, into the module using Add... on the
Input datasets tab.
Loading dataset 010_raw_data into the Interactive Grid Setup module
NOTE: You can track the loading process in the information panel at the bottom.
3. Let us enter the values of CDP_X, CDP_Y, Inline_No, Xline_No of the three points from the
first step into the table of reference points.
Loaded dataset with reference points
In our example, we took three different points and recorded their coordinates together with the numbers
of the Inline and Crossline at whose intersection cell centers these reference points will be located.
4. To build a grid from the reference points, press the Make grid from reference points button.
We see that the reference points are located at the centers of the cells (in accordance with the parameter
selected in the Grid Origin section) and the grid is located in the left-handed coordinate system.
5. After creating the binning grid, click Save Grid to save the created grid and use it later in the 3D
CDP Binning flow module. A sketch of the underlying computation follows below.
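For reference, the reconstruction amounts to fitting an affine transform from (inline, crossline) numbers
to (CDP_X, CDP_Y) coordinates; three non-collinear points determine it exactly. A sketch using the
reference points from step 1 (illustrative only; presumably, the module derives the grid origin, cell sizes
and Angle from this transform):

    import numpy as np

    # Reference points: ILINE_NO, XLINE_NO -> CDP_X, CDP_Y (from step 1).
    il_xl = np.array([[1, 15], [235, 836], [4, 1274]], dtype=float)
    xy = np.array([[510318.5, 7379729.9],
                   [511250.9, 7379883.9],
                   [511501.3, 7379298.5]])

    # Solve [il, xl, 1] @ A = [x, y] for the 3x2 affine matrix A.
    G = np.hstack([il_xl, np.ones((3, 1))])
    A = np.linalg.solve(G, xy)

    inline_step = A[0]  # shift in (x, y) per inline increment
    xline_step = A[1]   # shift in (x, y) per crossline increment
    origin = A[2]       # coordinates of the (0, 0) grid node
    # Cell sizes (and the rotation angle) follow from these vectors:
    print(np.hypot(*inline_step), np.hypot(*xline_step))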
Header Statistics
The module is designed to analyze the values of the headers passing through the flow. The results of the
analysis are output to the log or to text files specified by the user.
Operating principle:
The module accepts traces at the input and analyzes the values of the headers specified by the user.
Module parameters
Headers – the headers whose values will be analyzed.
Output statistics to – the place where the result of the module's operation will be output:
Note:
• If the header contains traces with the values inf, NaN or HeaderNoValue, these traces will not be
taken into account in the analysis of such parameters as Min value, Max value, Abs min inc
and Abs max inc (CHAN_NAN, FFID_HNV) – see the sketch below.
• If all traces have the values inf, NaN or HeaderNoValue, this value will be reported as the
Min value and Max value. The Abs min inc and Abs max inc parameters will contain a dash;
the other parameters will contain the calculated statistics (PICK1, PICK2, STAT1, STAT2).
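A minimal sketch of this special-value handling (HeaderNoValue is represented by a hypothetical
sentinel, and Abs min/max inc is interpreted here as the absolute increment between neighbouring
traces – an assumption):

    import numpy as np

    HEADER_NO_VALUE = -999999.0  # hypothetical sentinel for HeaderNoValue
    values = np.array([5.0, np.nan, 7.0, np.inf, HEADER_NO_VALUE, 3.0])

    valid = values[np.isfinite(values) & (values != HEADER_NO_VALUE)]
    if valid.size:
        print("Min value:", valid.min(), "Max value:", valid.max())
        inc = np.abs(np.diff(valid))
        print("Abs min inc:", inc.min(), "Abs max inc:", inc.max())
    else:
        # All traces carry inf/NaN/HeaderNoValue: report that value and
        # a dash for the increment statistics.
        print("Min value = Max value =", values[0])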
Interactive Tools
Screen Display
The Screen Display module is a basic instrument to view and interactively analyze the data in the
RadExPro program. The module works with data in the flow.
IMPORTANT! The data should contain the following filled headers: DT (sampling interval) and
NUMSMP (number of samples per trace). If either of these headers is zero, the module will stop with
an error. Note that these headers must be filled before the data enters the flow – changing their
values within the flow does not affect the operation of the module.
The module interface consists of two parts: the initial parameter setup dialogue and the main working window
at runtime. The settings dialogue can also be opened at runtime, but the user's changes will be active
only during the current session of the module.
When the user adds a module to the flow, a dialogue box with main module parameters Display
Parameters is opened:
Main Parameters
The right part of the dialogue box configures the trace drawing parameters: drawing method,
colour, gain. The left part is intended for configuring the scales and additional options.
Drawing Parameters
Traces can be displayed both in the variable density display mode (in colours) and in the wiggle trace
and variable amplitude display mode (WT/VA display mode). In addition, the variable density display
mode can display velocity sections (stacking CDP or interval velocities) instead of the traces.
Both display modes can be used separately or simultaneously. In the latter case, the traces in the wiggle
trace and variable amplitude display mode will be shown on top of the colour display of the same traces
(or of the velocity sections) in the variable density display mode.
WT/VA display mode – the group of parameters for displaying in the wiggle trace and/or variable
amplitude display mode.
By default, the wiggle trace and variable amplitude display mode shows all the traces in sequence. When
viewing a large number of traces at a small scale, such an image may not be informative –
adjacent traces can merge. The additional parameter Show every N-th trace allows reducing the number
of traces on the screen by showing only every N-th trace (e.g. every fifth).
Variable density display mode – group of parameters for displaying in variable density display mode
with the selected palette.
The palette is defined as a set of points with specified colours. The colours are linearly interpolated
between the specified points (a sketch of this interpolation is given after the list below). You can create,
move or delete points and change their colours. The points are located under the image panel in the
greyscale bar (white rectangles).
• To change the position of a point, drag and drop the point with the left mouse button (MB1).
The view of palette will change while moving.
• To change the colour of an existing point, double-click on it with the left mouse button (MB1).
The standard colour dialogue box will appear.
• To create a new point with the specified colour click on the palette with the left mouse button
while holding down the Shift key (Shift + MB1). The standard colour points dialogue box will
appear. A colour point will be added to the specified place of the palette.
• To remove a point from the palette, click on the point with the right button of the mouse (MB2).
• Load palette... – the button opens the standard Windows File Open dialogue to select an RGB
ASCII text file on disk.
• Save palette... – the button opens a standard Windows file saving dialogue to save the current
palette as an RGB ASCII text file on disk.
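Internally, such a palette is simply a piecewise-linear interpolation of the RGB components between the
colour points. A minimal sketch (the point positions and colours are hypothetical):

    import numpy as np

    # Palette points: position along the bar (0..1) and RGB colour.
    positions = np.array([0.0, 0.5, 1.0])
    colors = np.array([[255, 0, 0],      # red
                       [255, 255, 255],  # white
                       [0, 0, 255]])     # blue

    def palette_color(p):
        """Linearly interpolate each RGB component between palette points."""
        return np.array([np.interp(p, positions, colors[:, c])
                         for c in range(3)])

    print(palette_color(0.25))  # -> [255. 127.5 127.5], a light red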
An additional group of parameters Data / Velocity allows choosing the items to be displayed in variable
density display mode:
If the variable density display mode is used to show a velocity section, you can use the button for
selection of the velocity model (Set velocity) and the fields for setting the appropriate limits of the
velocity model and colour palette:
• Min. Vel (m/s) – the minimum velocity, corresponding to the right part of the palette;
• Max. Vel (m/s) – the maximum velocity, corresponding to its left part.
When user clicks on Set velocity, the dialogue box of velocity model selection appears:
The dialogue is similar to the Velocities tab of the NMO/NMI module. You can enter the velocity model
manually (single velocity function), read it from a file (Use file) or from the project database (Database
picks). Velocities can be stacking CDP velocities (RMS) or interval ones (Interval).
The following parameters for the wiggle trace and variable amplitude display mode and the variable
density display mode are set independently.
Screen gain – screen magnification: an additional factor by which the trace samples are multiplied before
displaying.
Bias – the average offset of the trace level with respect to zero, in percent. In the wiggle trace and
variable amplitude display mode, changing the parameter modifies the start level of the black filling of
the positive deviation. A positive value will produce a shift to the left of the trace zero line, increasing
the black-filled area of the curve. A negative value corresponds to a decrease of the black-filled area of
the curve. When displaying the data in the variable density display mode, this value will result
in a shift of the palette zero centre.
Options:
Scale mapping and additional parameters are set in the left side of the dialogue box:
• From t – the initial display time in milliseconds. Time sections will be displayed on the screen starting
from this time (all data can be viewed using the vertical scroll bar).
• To t – the final displayed time in ms. Time sections will be displayed up to this time. To
display all the samples until the end of the trace, enter 0. This option is available if the
TScale option is disabled. (The whole dataset can be viewed using the vertical scroll bar.)
• Number of traces – the horizontal scale of the data display: the number of traces displayed
simultaneously on the screen. This option is available if the Xscale option is disabled.
• TScale. If this option is enabled, the data will be displayed at the vertical scale explicitly
specified by the user. Specify the vertical scale in ms/cm in the field to the right.
• Xscale. If this option is enabled, the data will be displayed at the horizontal scale explicitly
specified by the user. Specify the horizontal scale in traces/cm in the field to the right.
• Ensemble boundaries. If this option is selected, different trace ensembles will be separated by
gaps on the screen. The trace ensembles are defined in the data input module of the flow
(e.g., in the Trace Input module, ensembles are formed according to the selected number of the
first sort fields).
• Variable spacing. If this option is enabled, the distance between adjacent traces on the screen
will be variable, proportional to the increments of the specified header value. To select the
header used for calculation of the distance between the traces, press the field button. For example,
using this option one can arrange the traces of a stacked section according to their real-world
coordinates along the profile.
• Ensemble's gap. The width of the gap between trace ensembles (in traces). The field is available
if the option Ensemble boundaries is selected.
• Multiple panels. Enable this option to display data on the screen in several panels arranged one
above the other. The number of panels is entered into the field on the right.
• Space to maximum ensemble width. This option is only available if both the Multiple
panels and Variable spacing options are selected. When it is enabled and the data are viewed in
several panels with variable distance between the traces, the scale used to arrange the
traces in each panel will be selected based on the maximum range of values of the selected
header found in the data. For example, if the distance between the traces is determined based
on the receiver point number, then, with this option selected, the traces of different seismograms
that have the same RP number will be displayed in different panels exactly one above the other.
• Use excursion ___ traces. This option is used to limit the maximum deviation when drawing
traces in the WT, VA or WT/VA modes. Specify the maximum allowable deviation of the traces.
If this option is selected, amplitudes exceeding the maximum deviation will be clipped on the
screen when the traces are drawn.
• Axis... This button opens the dialogue for setting the axis parameters (see Setting the axes)
• Plot headers... This button opens a dialogue box Header plot to set the parameters of
displaying the plots of values for the selected header fields (see Displaying the headers plots).
• Header mark ... This button opens the dialogue Header mark for settings the parameters of
the header marks (see Displaying the marks)
• Show headers... This button opens the dialogue Quick show header fields to select the header
field values that will be displayed in the status bar (see Displaying the headers in the status
bar)
• Picks / Polygons settings – this button opens the dialogue for setting the parameters of the picks
and of the polygons for quality control attributes at the transition to the next frame when the flow
is performed in the frame mode (Framed mode).
• Autoload picks opened on previous frame – automatically load the picks opened within the
previous frame.
• Autosave opened picks without notification – automatically save the opened picks when
exiting the module without notice.
• Autoload polygons opened on previous frame – automatically load the polygons for the
quality control attributes opened in the previous frame.
• Autosave opened polygons without notification – automatically save the opened polygons
for the quality control attributes while exiting the module without notice.
• Save template / Load template – these buttons allow saving the current module settings as a
template in the project database, or loading settings from a previously saved template,
respectively.
Setting the axes
The dialogue box for setting the axes parameters is opened by clicking the Axis... button in the main
parameters dialogue.
Time – this field group specifies the time intervals between the major (thick) and secondary (thin)
lines of the time grid. For example, to display the thick lines at every 1000 ms and the thin lines at every
100 ms, enter 1000.0 in the Primary lines field and 100.0 in the Secondary lines field.
Values. If this option is selected, the lines of the appropriate type will be labelled on the vertical scale of
the time section on the left.
Traces. This group of fields sets the horizontal scale labels – the trace labels. To
select the header whose value will be used as a label, click on the field. Two trace
labels can be used, each configured independently. The user can set the frequency of the trace labelling,
i.e. the intervals between the trace labels in the horizontal scale. The options are:
• Different labels the first trace and each subsequent trace with a header value different from the
previous one.
• Interval labels the first trace and every N-th trace after it. Set the desired interval N in the dx
field. For example, if you put 2 in the dx field, traces 1, 3, 5, 7, etc. will be labelled.
• Multiple labels a trace if its header value is divisible by the increment set in the dx field. For
example, if you put 5 in the dx field, the label will be added to all traces with header values
equal to 5, 10, 15, 20, etc.
• Values. Select this option to display the trace labels on the screen.
The size of the font to display the labels on the screen is set in the Font size field.
Margins. Here the user can set the left and top margins of the time section display on the
screen. These fields will be used for the labels of axes and traces. Enter the margin size in mm.
When launching a flow that contains the Screen Display module, a window like the one shown in the
following picture is opened:
The parameters of the data display and the scales are configured in the main parameters dialogue of
the module. During the module operation, they can be changed by calling the same dialogue with the
help of the Common parameters... menu command.
In addition, a number of predefined ways to display the data can be invoked during the module operation
with the use of the following keyboard shortcuts:
Ctrl +1 – the wiggle trace and variable amplitude display mode (WT/VA);
Ctrl +5 – the display mode of variable density in the red-white-and-blue palette (R/B);
Ctrl +6 – the display mode of variable density in the user-defined (Custom) palette (the palette is
adjusted by means of the Define button in the main parameters dialogue).
The axis configuration can be changed during the module operation with the use of the button in
the toolbar.
The title bar shows the name of the project / area / profile / processing flow from which the current
instance of the module was launched. At the end of the title bar, a time stamp is added that
corresponds to the date and time when the current module window was opened. The timestamps can serve
as guidelines when testing the processing parameters of the same flow: by recording the sequence of the
parameters you can compare them to the results on the screen. When testing the parameters, it is also
convenient to view the flow history.
The module allows viewing the flow history – i.e. the procedures and the parameters that were applied to
the data displayed in the current instance of the module. This is useful when testing different
processing parameters in the same flow in order to correctly identify the result of the application of
certain procedures.
In order to view the Flow History window, choose the View / History menu command. This will open
the Flow History window, which contains the snapshot of the flow at the time of its launching with the
current instance of the Screen Display module:
By double-clicking on a procedure in the history window, you can view its parameters as they were set
at the time the flow was launched.
Zooming (temporary enlargement of a picture fragment)
The module allows you to temporarily change the scale of the image, to increase the selected rectangular
piece of data to the size of the full window. To do this, click on the button in the toolbar (or press
the Z key on the keyboard or use the Zoom / Set zoom menu command). Then select the fragment to be
increased with the mouse: click the left button (MB1) in one corner of the fragment; move the cursor to
the opposite corner of the fragment while holding the button down, and release the button. The selected
rectangular piece of data will be increased to fit the full window.
To return to the original size that is specified in the main parameters dialogue, click on the button
in the toolbar (or press X on the keyboard or click the Zoom / Unzoom menu command).
You can synchronize the display of several Screen Display windows in order to compare the data before
and after some processing, “binding” them together. All bound Screen Displays will be of the same size,
and they will synchronize data display parameters, zoom and scroll.
To synchronize another Screen Display window with the current one, click the button in the
current window, and holding the mouse button pressed, drag the target symbol to another Screen Display
window that you want to synchronize with the current one. After that, the two windows will be bound
together, so that changing display in one of them will be reflected in another one in exactly the same
way. This way you can bind as many Screen Display windows as you wish.
The module permits estimating the velocities of reflected waves on pre-stack seismograms and of
diffracted waves on stacked sections and t0 sections, as well as the apparent velocity, by means of
interactive fitting of the observed events with theoretical travel-time curves.
The module permits determining the effective velocity and the depth of a diffracting object on the basis
of the travel-time curve (hodograph) of the diffracted wave on a stacked seismic section or the t0 section.
Click the button in the toolbar or use the Tools / Approximate / Hyperbola (diffraction) menu
command. The screen will show the hyperbola: the theoretically calculated travel-time curve
(hodograph) of the diffracted wave. When calculating the hodograph, the trace coordinates recorded in
the CDP_X and CDP_Y fields are used. In order to determine the parameters of an observed diffracted
wave, fit the theoretical hyperbola to it to achieve the best match.
Use the left and the right mouse buttons to control the theoretical hyperbola. The top of the screen
displays the line that shows the values of the effective velocity and the depth of the diffracting object.
In order to determine the effective velocity for the hodographs of the reflected waves based on pre-stack
data, click the button in the toolbar or select the Tools / Approximate / Hyperbola (reflection)
menu command.
The screen will then display the theoretically calculated hodograph of the reflected wave. In the course
of the calculation of the hodograph, the offset values (in meters) are used, which must be recorded in
the OFFSET header field. In order to determine the parameters of an observed reflected wave, achieve
the maximum match of the theoretically calculated hyperbola with the observed one. The top of the screen
will display the line that shows the current parameters of the theoretical hodograph.
Ranges and steps of the search for the parameters of the reflected wave hodograph can be set using the
Tools / Approximate / Hyp. (Reflection) parameters menu command. The dialogue box will appear
in which the hyperbola parameters must be set: the step and the ranges of vertical time (t0 step, min,
max), the effective velocity (v step, min, max) and the sloping angle of the reflecting border (fi step,
min, max). The sloping angle of the border is specified in degrees; the positive values correspond to the
slope of the border in the direction of the offset increase. After setting the parameters, click OK.
Determination of the Apparent Velocities
To estimate the apparent velocities as the ratio of the increment of the distance to the increment of the
time, click the button in the toolbar or use the Tools / Approximate / Line menu command.
The screen will display a line segment, the beginning and the end of which can be set using the left and
the right mouse buttons, respectively. The top of the screen will display a line that shows the value of
the apparent velocity that corresponds to the current slope of the segment.
In the course of the calculation of the apparent velocity, the distance values are taken from the header
that can be selected using the Tools / Approximate / Line header word... menu command (by default,
the distance is taken from the OFFSET header).
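For example (with hypothetical numbers): if the segment covers a distance increment of 480 m and a
time increment of 200 ms, the displayed apparent velocity is 480 m / 0.2 s = 2400 m/s.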
If the Tools / Approximate / Save parameters menu command is used when one of the velocity
estimation tools is active, a window with an editable text field will appear on the screen. Pressing
the Ctrl + Q key combination copies the current parameters of the active tool as a string
into the text box of the window. Arbitrary text comments can then be added to them. The contents of
the window can be saved to a text file.
Amplitude Spectrum
In order to view the average amplitude spectrum of a trace ensemble in a given time window, click on
the button in the toolbar or select the Tools / Spectrum/Average menu command. Then, select a
rectangular piece of data using the mouse, which will be used to calculate the spectrum (the window is
configured in the same way as in zooming). The window with the display of the amplitude spectrum is
then opened:
By default, the X-axis shows the frequency in Hz from 0 to the Nyquist frequency.
Along the Y-axis, the values of the amplitude spectrum can be specified either as a percentage of the
maximum amplitude of the spectrum (% Amplitude), in dB (dB Amplitude), or as true raw amplitude
(Raw Amplitude). In order to switch between the modes, use the Scale / Scale type menu command.
When the display in dB is selected (Scale / Scale type / dB Amplitude), the absolute value of the
spectrum amplitude, with respect to which the values in dB are calculated (Reference Amplitude), can
be set in the dialogue box opened via the Parameters command menu:
When a negative value is selected as the Reference Amplitude, the dB values will be calculated with
relation to the maximum absolute value of the amplitude spectrum.
In the same dialogue, you can select the colour that will be used to display the spectrum plot. To do this,
click on the coloured Plot colour rectangle.
The range of the spectrum plot fragment can be increased along either axis to fit the entire spectrum
window. To do this, select the corresponding range along one of the axes with the mouse: move the
cursor to the beginning of the range on the axis, click the left mouse button and move the cursor to the
end of the range while holding the button down. Then release the mouse button:
In order to return to the original size along one of the axes, double-click the axis with the left mouse
button.
The values of the average amplitude spectrum in the selected scale can be saved to a text file with the
use of the File / Save... menu command.
You can add additional spectrum plots to the same window. For that, click the File / Add spectrum menu
item and repeat the selection of an area for the spectrum calculation. The additional spectrum will appear
in the same window:
The areas where the spectra were taken from will be marked in the Screen Display window with
rectangles of the corresponding colors:
At the right of the spectrum window there is a list with the curve names and colors. Clicking on a curve
name you can change it (the same way as renaming a file in Windows Explorer). Clicking on the colored
box near the name you can change the curve color:
When the spectra are displayed in % or in dB referred to the maximum amplitude, each curve is scaled
separately, as shown in the figure above. When the true amplitude display is selected, or dB are referred
to a fixed amplitude value, the curves are displayed at a common scale so that their relative amplitudes
can be compared. The same spectra as above are displayed here in true amplitudes:
To review and analyze the two-dimensional (F-K) spectrum of a trace ensemble in
a given time window, click the button in the toolbar or use the Tools / Spectrum / 2D
Spectrum menu command. Then, select a rectangular piece of data using the mouse, which will be used
to calculate the 2D spectrum (the window is configured in the same way as in zooming). This will open
the F-K analyze two-dimensional spectrum analysis window:
The window is divided into two parts – the left one shows a piece of data for which the two-dimensional
spectrum was calculated (the TX field), the right one – the actual F-K spectrum (the FK field). The
parameter panel is located in the right section.
In the TX field, the vertical axis represents the two-way travel time in ms; the horizontal
axis, the distance from the beginning (left side) of the fragment in meters. The step between the adjacent
traces in the fragment is constant – it can be set in the Distance between traces section of the options
bar either explicitly (the Manual DX option) or by specifying a pair of headers that will be used for its
calculation (First header and Second header). In the case of the header-based calculation, the
constant step is set as the median value of the actual distances between the adjacent traces within the
fragment.
In the FK field, the vertical axis represents the frequency F expressed in Hz (0 to the Nyquist frequency),
and the horizontal axis represents the spatial frequency K expressed in 1/m.
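For orientation, the F-K amplitude spectrum of such a fragment can be sketched with a two-dimensional
FFT (illustrative only; the module's actual tapering and normalization are not documented here, and the
data array below is random):

    import numpy as np

    dt = 0.002  # sampling interval, s
    dx = 12.5   # permanent step between adjacent traces, m
    data = np.random.randn(512, 48)  # TX fragment: samples x traces

    # 2D FFT over time and space; centre the spatial-frequency axis.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(data), axes=1))
    f = np.fft.fftfreq(data.shape[0], d=dt)                   # F axis, Hz
    k = np.fft.fftshift(np.fft.fftfreq(data.shape[1], d=dx))  # K axis, 1/m

    fk = spec[f >= 0, :]  # keep F from 0 to the Nyquist frequency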
The screen gain of each of the fields can be configured independently in the Gain section of the toolbar
with the use of the FK and TX gain knobs. In addition, the colour palette can be selected and customized
for each field using the Parameters / FK Palette and Parameters / TX Palette menu commands.
An arbitrary rectangular fragment of the TX or FK field can be zoomed to the size of the corresponding
panel. To do this, click on the button in the toolbar of the F-K analyze window and select the
fragment to be enlarged by pressing and holding the left mouse button. An incremental reduction
of the display scale of both fields is achieved by pressing the button. In order to quickly return to the
original scale, use the corresponding toolbar button.
The mutual correspondence of the TX and FK fields and the apparent velocity of the observed seismic
events can be analyzed with the use of the Ruler tool.
Click the button and control the "ruler" that appears on the screen with the help of the left and the
right mouse buttons.
In this case, the status bar will show the apparent velocity dx/dt (in m/s) which corresponds to the
current slope of the "ruler".
HINT: The values of the apparent velocities can be used in the F-K filter module (F-K Filter) to set the
parameters of the fan filter (in the Fan mode).
The arbitrary areas of the F-K filtering can be interactively defined and tested in the two-dimensional
spectrum analysis window. The areas can be saved in the project database and then used as filters in
the F-K filter module (FK Filter) in the Polygon mode.
In order to create the F-K filtering area, click on the button in the toolbar. The area is defined in
the FK field as an arbitrary polygon. The polygon nodes are added by clicking the left mouse button.
Right-click the mouse to drag the existing node to a new location. To remove the node, double-click
the right mouse button. In order to delete the entire area, click on the button in the toolbar.
If any other control is clicked during the area editing, the program will exit the edit mode. In order to
return to this mode, click the button again or use the Pick / Edit menu command.
After the filtration area is set, it can be tested by selecting the filter mode (Filter mode) in the options
bar: Reject or Pass. Click Preview to preview the results of the filter application to the selected piece
of data (the TX field). In order to return to the original image, click Undo.
The image below shows the results of the application of the same filtering area to the data in the Reject
and the Pass modes:
After the filtering area is set, and if you are satisfied with the preliminary results of its application, save
it to the project database by selecting the Pick / Save polygon menu command. Then, the saved area can
be loaded in the Parameters dialogue of the F-K Filter module and used to filter the data in the flow.
The previously saved filtering area can be loaded into the two-dimensional spectrum analysis window.
To do so, use the Pick / Load polygon menu command.
NOTE: you can create/load more than one polygon in the F-K Analyze tool of the Screen Display
module and preview their simultaneous filtering result.
Working with the Headers
Click the button in the toolbar; then click the left mouse button on the arbitrary trace on the
screen. The Headers Display window with the table of values of all the header fields for the selected
trace will be opened:
In the Main Parameters dialogue, click the Plot headers... button. This will open the dialogue for the
configuration of the plots of header field values:
General parameters. This group includes the common parameters that are the same for all the
curves:
• Plot headers. Enable this option to output the plots of the header values to the screen of
the Screen Display module.
• Time scale. Enable this option to interpret the header values as time values and to
place them on the existing time scale. For example, this option can be used to display
static shift values at the real time scale.
• Fill background. This option is used to display the plots against a solid-colour
background. Click on the coloured square on the right to select the background colour.
Curves to plot. The headers to be displayed in the form of the plots can be specified here:
• Add. Use this button to select the header fields to be displayed. Use the button to open
the list of available headers for the current project. Multiple headers in the list can be
selected with the left mouse button and Shift and Ctrl keys. After selecting the headers,
click OK. The selected headers will appear in the list.
• Remove. This button deletes the selected header fields from the list of the displayed
plots. Multiple headers in the list can be selected with the left mouse button and the
Shift and Ctrl keys. In addition, there exists the option to plot interactive statics:
• Current statics. Select this option if you want to display the time shifts that are made
in the current panel but have not been yet applied, in the form of a plot.
• Applied statics. Select this option if you want to plot the time shifts that have already
been applied to the data in the panel. These changes do not affect the Current Statics.
• Total statics. Select this option if you want to plot the total time shifts, i.e. the sum of
Current Statics and Applied Statics.
Curve parameters. Here, the individual display parameters for each of the header fields in the
list are configured. Left-click on one of the selected header fields in the list to view and edit the
settings for its display.
• Colour. Here the colour of the display of the selected header fields can be set. Click on
the coloured square to set a new colour.
• The underlying fields specify the scale of plotting. They become available if the Time
scale option is deselected.
• Plot area position (%). Enter the downward offset value for the header plot as the
percentage of the screen size. If this field is set to 0, the plot will be displayed directly
above the time section.
• Plot area width (mm). The height of the area (in mm) intended for the output of the
plot of the values of the selected header field.
• Whole range. When this option is enabled, the whole range of the header values will be
scaled to fit entirely to the specified display area. Otherwise, the range of values to be
displayed must be set manually by specifying its minimum (Min scale value) and
maximum (Max scale value) values.
• Show scale. This option permits displaying the axis with the scale bar for the selected
plot. If it is enabled, the axis location can be set as the percentage of the width of the
screen (0 for the left-hand edge; 100 for the right-hand one) in the Scale Position
field.
• Value marks orientation is used to specify the position of the mark relative to the
scale axis – to the left (Left) or to the right (Right).
• Autoscale. When this option is enabled, the interval between the scale marks on the
axis is selected automatically. Otherwise, set the desired interval in the Mark distance
field.
After the plotting parameters of one of the header fields in the list have been configured, the next field
in the list can be accessed by clicking on it with the left mouse button.
In this case, the parameters set for the previous field will be saved.
After setting the parameters, click OK. The values of the selected header fields will be plotted according
to the selected parameters.
Click the Header mark... button in the Main Parameters dialogue. The Header mark dialogue box
will then be opened, in which the header marks can be set:
• Field.... Click this button to select the header that will be assigned to this type of mark. Up to
three headers can be associated with marks at the same time. Select the header word from the
list; then click on the square to select the colour of the marks. As a result, all the traces where
the value of the selected header changes will be marked by vertical lines in the corresponding
colour.
• Clear: this button removes the selected header field from the Field ... section and cancels the
associated marks.
Click the Show headers... button in the Main Parameters dialogue. This will open the Quick show
header fields dialogue box where the header fields can be selected, the values of which will be
displayed in the status bar.
• Add... Click this button to select, from the list of the header fields, the header whose values
must be displayed in the status bar. The header fields to be displayed will be included in the list
in the left part of the dialogue box.
• Delete. Click to delete the selected header from the list of the headers selected to be displayed.
• Quick show dataset name. If this option is enabled, the status bar will display the name of the
dataset on which the cursor is positioned.
After setting the parameters, click OK. The values of the selected header fields of the traces that the
cursor is placed on will be displayed in the right-hand side of the status line of the working window of
the Screen Display module.
The Tools / Trace Header Math... menu command is used to open the window of the mathematical
operations module with the headers Trace Header Math (see the description of the module in the
relevant section). The result of the editing will be applied to the trace headers in the flow (the current
frame of the flow).
In order to enter the pick editing mode, click on the button in the toolbar. This will open an
additional dialogue box with the list of picks. At the same time, if no picks have been created before, a
new pick will be created automatically and its name will appear in the list:
List Window Commands
- The + and – buttons are used to create a new pick and to remove the current one selected in the
list.
- In order to select an active pick, click on its name in the list with the left mouse button. It is also
possible to switch between the picks by pressing the Tab key (down the list) and the Shift + Tab key
combination (up the list).
- Left-click on the colored square to the left of the name of the pick to open the standard object color
selection dialogue.
- The square with a check mark to the left of the name of the pick determines its appearance on the
screen.
- Right-clicking on the name of a pick opens the context menu.
While the list window is open, the picks can be edited on the screen. In order to exit the edit mode,
close the list of the picks (or release the button in the toolbar). The current picks will remain on
the screen. In order to continue the handling of the picks, press the button again.
Once created, a pick can be edited on the screen with the use of the mouse and keyboard.
- In order to add a node to the pick, left-click on the screen. In this case, depending on the selected
pick mode, one or more nodes will be added (see Pick Parameters).
- In order to move the node, drag and drop it with the right mouse button to the new location. You
can also right-click on the new location – this will cause the nearest node to move to the cursor
position.
- In order to move the whole pick up or down, grab any of the pick nodes with the right button, hold
down the Shift key, and drag the entire pick to the new location.
- In order to remove a single node, double-click on it with the right mouse button (the range of the
nodes is deleted in the Eraser mode – refer to Pick Parameters).
- In order to remove an entire pick, press the Delete key on the keyboard or click the button in
the list window; this will delete the pick currently selected in the list.
- In order to add a new pick, press the N key on the keyboard or click on the button in the list
window.
- The current pick can be smoothed by pressing the O key (or by using the Smooth
command of the Tools / Pick menu or of the pick context menu). The size of the sliding smoothing
window is set in the pick parameters. If the Ensemble boundaries option is on, the smoothing will
be done inside each ensemble. Otherwise, all pick points will be considered as a single pick and
smoothed correspondingly.
All of the above commands in the Pick Edit mode can be cancelled (Undo) or repeated (Redo).
- In order to undo the last edit command, click on the button in the toolbar or press the Ctrl
+ Z key combination on the keyboard. The rollback depth is up to 10 steps.
- In order to repeat the last undone command, click on the button in the toolbar or press the
Ctrl + Y key combination on the keyboard.
The pick parameters determine how the pick nodes will be added. This section is also used to set the
size of the window of the pick smoothing. In addition, the display of the current picks can be
configured here.
In order to open the pick settings dialogue, press the Latin "A" key in the pick edit mode. The same
dialogue can be accessed via the Tools / Pick / Picking parameters... menu. You can also choose a
similar command in the context menu of the pick that appears when right-clicking the pick name in the
list.
Mode – the selection of the pick mode:
• Manual – the manual pick filling without interpolation (U hot key). The pick node adding
command adds only one node in the cursor position.
• Auto-Fill – the pick filling with the tracking of the phase between the neighboring nodes (F
hot key). On the command of the new node adding, the nodes in all the traces between the
previous node and the current cursor position are added as well. If possible, the nodes are
added along the selected phase. (The phase to be traced is selected in the Parameters field
below.)
• Linear Fill – the pick fill with the linear interpolation between the neighboring nodes (L hot
key). On the command of the new node adding, the nodes on all the traces between the
previous node and the current cursor position are added as well. The nodes are arranged
along a straight line.
• Draw – (D hot key) move the mouse keeping the left button pressed. The pick nodes will be
added to every trace, following the cursor.
• Draw along phase – (G hot key) The same as Draw but the nodes will snap to the nearest
phase (the type of phase to be traced is selected in the Parameters field below) within the
Guide window around the cursor.
• Eraser – the eraser mode is used to delete a range of pick nodes. To switch to this mode,
from any other mode, press and hold E key. Left-click and hold the mouse button at the
beginning of the range to be deleted, then move the cursor to the end of the range, and finally
release the mouse button. All the nodes that fall into the selected range will be deleted. After
you release E key, the software returns to the mode that was used before.
• Hunt – the automatic tracing of the horizon (T hot key). On the command of adding a new
node, the program will add new nodes to each trace to one or both sides of the new node for
as long as the horizon can be traced. (The phase to be traced is selected in the Parameters
field below.)
During the phase tracking, the position of the pick node on the current trace is calculated as an offset
relative to the node on the previous trace, defined by the maximum of the normalized cross-
correlation function between the current and the previous traces within the specified correlation window
(Correlation window).
The automatic pick filling will be terminated at the end of the traceable event, when the program cannot
trace it any further: when the cross-correlation function within the given correlation window has no
defined maximum, when the maximum is located outside of the tracing window (Guide window), or
when the quality of the correlation does not comply with the pre-set criteria (Correlation test).
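A minimal sketch of one tracking step under these rules (the window lengths and halt threshold are
hypothetical; the module's actual correlation and termination logic may differ):

    import numpy as np

    def track_step(prev_trace, cur_trace, node, win, guide, halt=0.7):
        """Move a pick node from sample index `node` on prev_trace onto
        cur_trace, using the maximum of the normalized cross-correlation
        within the guide window; return None when tracking must stop."""
        lo = node - win // 2
        if lo < 0 or lo + win > len(prev_trace):
            return None
        ref = prev_trace[lo: lo + win]  # correlation window, previous trace
        best_cc, best_shift = -1.0, None
        for shift in range(-guide, guide + 1):  # candidate shifts, samples
            start = lo + shift
            if start < 0 or start + win > len(cur_trace):
                continue
            seg = cur_trace[start: start + win]
            denom = np.linalg.norm(ref) * np.linalg.norm(seg)
            if denom == 0.0:
                continue
            cc = float(np.dot(ref, seg) / denom)
            if cc > best_cc:
                best_cc, best_shift = cc, shift
        if best_shift is None or best_cc < halt:  # Correlation test failed
            return None
        return node + best_shift  # node position on the current trace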
Hunt options: the automatic tracing mode parameters can be configured here (the parameters are
available only if the Hunt mode is selected):
• Correlation window – the length of the window of the cross-correlation function calculation
(in ms).
• Correlation Test – this option is used to check the quality of the cross-correlation of the
current and the previous traces. If the maximum of the normalized cross-correlation function
is below the Halt threshold value, the automatic tracing will be terminated. The Halt
threshold parameter can take values from 0 (no correlation) to 1 (perfect correlation).
• Hunt direction – the following three buttons determine the direction of the tracing: << –
only leftward; <> – both ways; or >> – only rightward.
• Show hunt direction window – if this flag is activated, the tracking direction buttons will be available during interactive work with picks in the Hunt mode. The buttons are displayed on a special floating panel over the main module window:
Parameters: the exact phase to be traced in the Auto-Fill and Hunt pick modes can be selected here (the parameters are available only if one of these modes is selected).
• Peak – the local maximum of the signal values,
• Trough – the local minimum of the signal values (i.e. the largest negative values in absolute terms),
• Zero: Neg2Pos – the signal zero-crossing from the negative values to the positive ones,
• Zero: Pos2Neg – the signal zero-crossing from the positive values to the negative ones.
• The Guide window length parameter specifies the length of the window in milliseconds
that will be used to search for the selected phase (i.e. the local maximum, minimum etc.).
In fact, this parameter specifies the maximum shift of the current pick point from the
previous one.
• Smoothing – the pick smoothing parameter. The Window length___points field indicates the number of pick nodes in the sliding window used for smoothing when the Smooth command is selected (see Editing the Picks).
• The Line style button opens the pick line display settings window:
• You can select the line style (Solid, Dashed, Dotted), width (Line width) and colour (Colour). The Draw cross-marks at nodes option controls the appearance of icons at the pick nodes (crosses for the active pick and circles for the others). If it is off, no icons are displayed at the nodes.
• Marks only – if this option is enabled, the pick is displayed as vertical marks on the traces where its nodes are located (such a display can be useful when the pick marks a set of traces for subsequent rejection in the Trace Editing module).
The Pick Headers
A pick in RadExPro is a 3-column table. The first two columns contain the values of the 2 header fields that are used to tie the pick to the traces. These headers must permit correct identification of the trace that hosts each pick node. Depending on the situation, they can be, for example, FFID:CHAN, CDP:OFFSET, or ILINE_NO:XLINE_NO. The third column is the pick value itself, i.e. the time of the pick node on the corresponding trace.
If a single header is sufficient to identify the trace (e.g., in a stacked section), or if the pick nodes must be assigned to all the traces with the same header value (e.g., to the entire seismogram), both pick headers must be specified as identical (e.g., CDP:CDP).
Before the picks are saved or exported, always check that the current pick headers comply with the task at hand and with the peculiarities of the existing geometry. For example, if the pick is meant for top muting of unstacked CDP data and is set individually for each CSP seismogram, the following header pairs can be required: FFID:OFFSET or SOURCE:OFFSET. If the same top-muting pick must be the same for all SPs, i.e. the muting time must depend only on the offset, then the following headers must be used: OFFSET:OFFSET. If the pick is set for the first onsets to be processed by the MPV method, the headers must be the linear coordinates of the source and the receiver – SOU_X:REC_X.
Notice that using headers with floating-point values (for example, any coordinate headers) for saving the pick and loading it back into the section can lead to incorrect display of the picks. This is due to rounding-off errors – a value in the pick table can differ from the same value in the trace header, for example, in the 10th decimal place, so that the program cannot find the required trace. In such cases, when saving the pick, you are advised to tie the pick to integer values (for example, instead of the CDP_X coordinate, use the CDP point number). If a floating-point header is necessary and the re-loaded pick is not displayed correctly, interpolated loading of the picks can be used (see Loading / Import of the Picks).
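The effect is easy to reproduce in any language with IEEE floating-point arithmetic; a short Python illustration (the values are invented):

    # Floating-point key values that 'look' equal may differ in a far decimal place:
    a = 0.1 + 0.2                  # e.g. a coordinate computed during processing
    b = 0.3                        # the same coordinate as stored in the header
    print(a == b)                  # False: a is actually 0.30000000000000004
    # Integer keys such as CDP numbers are free of this problem:
    print(1024 == 1024)            # True - exact match is guaranteed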
In order to select the pick headers in the edit mode, use the Tools / Pick / Pick headers... menu
command and select the required fields in the dialogue box with two lists of headers that opens:
The same dialogue box can be invoked directly before saving the pick into the project database or into
the header of the dataset. To do this, click the Pick headers... button in the appropriate dialogue boxes
(see Saving / Export of the Picks).
Having set a pick in one seismogram, you can later project it to all the other seismograms of the flow (or of the current frame of the flow). In this case, pick nodes will appear on each trace of each seismogram within the range of offsets (the OFFSET field values) over which the pick was originally set. Between the source nodes, the pick values are interpolated linearly based on the OFFSET field value of each trace.
To project the pick, create a new pick and set it in one of the seismograms (the one with the maximum range of offsets); then select the Project command from the pick context menu (or from the Tools / Pick / menu).
The pick can be saved in the project database, in a header of the flow traces, as well as in a header of a dataset predefined in the project. In addition, it can be exported to a text (ASCII) file.
In order to save the pick in the database, right-click on its name in the list and select one of the two
menu commands – Save (to be saved under the same name) or Save As... (to be saved under a new
name). These commands are also available via the Tools / Pick /... menu. In addition, the Save As...
command can be invoked via the hotkey Ctrl + S or with the use of the button in the toolbar.
If Save As... is chosen, as well as on the first use of the Save command, the following Database Pick Saving window will be opened:
In the Location field on the right, select the level of the database that the pick is to be saved at. The
Objects field on the left displays the existing picks that are located at this level.
The new pick name can be specified in the Object name line or the existing name can be overwritten if
it is selected in the Objects field.
When overwriting the existing pick, the Append flag becomes available. It is intended to be used in the
frame mode exclusively – if it is on, the pick nodes of the current frame will be appended to the nodes
of the previous frames. For all other cases, this option should be off.
Before saving the pick, the Pick headers... button can be clicked to select the two header fields that will
be used to tie the pick to the trace (see Pick Headers).
Saving into the Header
In order to save the picks in the headers of the flow traces, as well as in the headers of the dataset
predefined in the project, select the following context menu command: Save to header... (which is also
available via the Tools / Pick / menu).
In the dialogue box that opens, select the header field to save the pick values to from the drop-down list
in the Header field.
Before saving, the Pick headers... button can be pressed and two header fields be selected that will be
used to tie the pick to the traces (see Pick Headers).
The Reflect changes in button is used to additionally save the pick into the header with the same name
within the dataset predefined in the project.
If no dataset is selected, clicking OK will still open the dialogue box for the selection of the dataset for the additional saving of the pick. If Cancel is clicked in this window, the pick will be saved only in the flow trace header.
By clicking OK, the node time values expressed in ms will be recorded into the selected header of the traces that host the pick nodes. Header values on other traces will not change; however, if the header did not contain any values before, the value 9999 – the 'no value' sign – will be written to the traces that host no nodes.
In Screen Display you can work with horizons stored in trace headers the same way as with picks in the database. When you load a pick from a trace header, it stays connected with that header in the flow, so you can save it back there without specifying a dataset. Moreover, the Picks/polygons settings options Autoload and Autosave pick between frames affect the header picks as well, so the pick is automatically loaded from its header and saved back to it when you switch between frames.
IMPORTANT! Note that a pick edited and then saved to a header in the flow will NOT be automatically reflected in any dataset. So, if you plan to edit picks from headers in Screen Display, don't forget to use either Header<->Dataset Transfer or Trace Output after the Screen Display to save your changes!
A pick loaded from a header shows its header name and can be saved back to the header in the flow:
Switch these options on to easily operate with the picks between frames:
Select the Export pick command in the pick context menu. This will open the standard text file save
dialogue. Select the file name and click OK.
The text file format is shown in the following example:
CDP:OFFSET
The first line contains the colon-separated names of the header fields that are used to tie the pick to the traces. Each of the following lines corresponds to a single pick node: first the colon-separated pick header values, then the pick node value, i.e. the time expressed in milliseconds.
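A hedged sketch of reading and writing this format in Python (the separator between the key fields and the time value is assumed here to be whitespace; the node values in the usage example are invented for illustration):

    def write_pick(path, header_names, nodes):
        # nodes: list of (key_values_tuple, time_ms)
        with open(path, "w") as f:
            f.write(":".join(header_names) + "\n")
            for keys, time_ms in nodes:
                f.write(":".join(str(k) for k in keys) + " " + str(time_ms) + "\n")

    def read_pick(path):
        with open(path) as f:
            header_names = f.readline().strip().split(":")
            nodes = []
            for line in f:
                if not line.strip():
                    continue
                keys_part, time_part = line.rsplit(None, 1)   # last field = time, ms
                nodes.append((tuple(keys_part.split(":")), float(time_part)))
        return header_names, nodes

    write_pick("mute.txt", ["CDP", "OFFSET"], [((120, 250), 512.0), ((120, 300), 540.5)])
    print(read_pick("mute.txt"))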
The picks can be loaded from the database, from the flow trace header, or imported from a text file.
Press the button in the toolbar or the Insert shortcut key on the keyboard or select the Load
pick command from the pick context menu or from the Tools / Pick / menu. Then the pick selection
dialogue box will appear:
In the Location field on the right, select the level of the database that the pick is saved at. The Objects
field on the left displays the existing picks that are located at this level.
Select the pick in the Objects field, and the Matching line will display the headers that are used to tie
the pick to the traces.
NOTE: If after loading the pick name has appeared in the pick list but the pick itself is not displayed or is displayed incorrectly, this can happen for either of two reasons: (1) the pick headers do not match the geometry of the existing traces, or (2) the pick headers have floating-point values and cannot be tied to the traces because of rounding-off errors – differences in the last decimal places. In the first case, it is necessary to either alter the pick or fix the geometry. In the second case, you can try to load the pick with interpolation.
When loaded with interpolation, pick nodes are placed on all the traces within the range of each pick header. The position of the new nodes on the traces between the original pick nodes is calculated by linear interpolation between the existing nodes.
In order to load the pick with the interpolation, select the Load w/ interpolation... pick context menu
command or the equivalent Tools / Pick / menu command. Next, select the pick in the database pick
selection dialogue.
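The interpolation itself is plain linear interpolation of node times over the pick-header range; a minimal Python sketch (names are illustrative, not the module's internals):

    import numpy as np

    def interpolate_pick(node_keys, node_times, trace_keys):
        # node_keys  - pick-header values of the original nodes (sorted, e.g. CDP_X)
        # node_times - pick times in ms at those nodes
        # trace_keys - header values of the traces in the flow
        # Returns {trace_key: time_ms} for traces inside the node range only.
        node_keys = np.asarray(node_keys, float)
        node_times = np.asarray(node_times, float)
        out = {}
        for k in trace_keys:
            if node_keys[0] <= k <= node_keys[-1]:
                out[k] = float(np.interp(k, node_keys, node_times))
        return out

    # Nodes picked at CDP_X 100.0 and 200.0 produce a node on the trace at 150.2:
    print(interpolate_pick([100.0, 200.0], [300.0, 340.0], [150.2]))   # ~320.08 ms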
Loading of the Pick from the Flow Trace Headers
Use the Load from header... command in the pick context menu or run it from the Tools / Pick / menu. In the dialogue box that opens with the list of the available header fields, select the field that contains the pick. When picks are loaded, the 9999 value in the header will be neglected, as it is used as the 'no value' sign.
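The save/load round trip with the 9999 'no value' sign can be sketched as follows (a simplification: as noted above, real headers keep their previous values on traces without nodes; names are illustrative):

    NO_VALUE = 9999.0   # the 'absence of number' sign

    def pick_to_header(times_by_trace, n_traces):
        # Write node times (ms) into a header column; 9999 where no node exists.
        return [times_by_trace.get(i, NO_VALUE) for i in range(n_traces)]

    def header_to_pick(header_column):
        # Read a pick back, neglecting the 9999 'no value' entries.
        return {i: t for i, t in enumerate(header_column) if t != NO_VALUE}

    col = pick_to_header({0: 120.0, 2: 128.5}, 4)   # [120.0, 9999.0, 128.5, 9999.0]
    print(header_to_pick(col))                      # {0: 120.0, 2: 128.5}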
Use the Import pick... command in the pick context menu or run it from the Tools / Pick / menu. This
will open the standard text file selection dialogue. The file format is similar to that described in
Exporting to a text file.
NOTE: If after the import the pick name has appeared in the pick list but the pick itself is not displayed or is displayed incorrectly, this can happen for either of two reasons: (1) the pick headers do not match the geometry of the existing traces, or (2) the pick headers have floating-point values and cannot be tied to the traces because of rounding-off errors – differences in the last decimal places. In the first case, it is necessary to either alter the pick or fix the geometry. In the second case, you should import the pick into the database through the Database Manager (see Database Manager) and then load it from the database with interpolation.
The Screen Display module is used to interactively define arbitrary areas (QC polygons) that correspond to different types of waves in the seismograms. These areas can be stored in the project database and used to calculate quality attributes in the Ensemble QC Compute module.
The area (polygon) of attribute calculation is set as a closed polygon of arbitrary shape. The polygon nodes are tied to the traces by offsets (the OFFSET values) and by time. Thus, it suffices to set a polygon in a single CSP or CDP seismogram; it will be simultaneously displayed in all the seismograms at the appropriate offsets.
To enter the QC polygon edit mode, select the following menu command:
Tools / QC polygons / Edit polygons. This will open the additional window with the list of the
polygons. At the same time, if no polygons have been created before, a new polygon will be created
automatically and its name will appear in the list:
The List Window Commands
- + and – buttons are used to create a new polygon and to remove the current one, which is selected
in the list.
- In order to select an active polygon, click on its name in the list with the left mouse button.
- Left-click on the coloured square to the left of the name of the polygon to open the standard object
colour selection dialogue.
- The square with a check box to the left of the name of the polygon determines its appearance on the
screen.
- Right-clicking on the name of the polygon opens the context menu.
While the list window is open, the polygons can be edited on the screen. In order to exit the edit mode,
close the list of the polygons. The current polygons will remain on the screen. In order to continue
handling the attribute calculation areas, use the Tools / QC polygons / Edit polygons menu command.
Editing a Polygon
Once created, a new polygon can be edited on the screen with the use of the mouse and the keyboard.
- In order to add a node to the polygon, left-click on the screen. If the polygon already has a few nodes, the new node will be added to the edge closest to the cursor.
- In order to move a node, drag and drop it to the new location with the right mouse button.
- In order to remove a single node, double-click it with the right mouse button.
- In order to delete the entire polygon, select it with the mouse in the list and press the Delete key.
- In order to add a new polygon, press the Q key on the keyboard or click on the button in the window with the list.
Saving and Loading of Polygons
In order to save the polygon in the project database, right-click on its name in the list and select Save
as... from the context menu (which can be run from the Tools / QC polygons / menu; in this case, the
current polygon will be saved) and specify the position and name of the polygon in the database object
saving dialogue box.
In order to load the previously saved polygon from the database, right-click the blank white field in the
list and use the Load polygon context menu command (which can be run from the Tools / QC
polygons / menu as well).
Text Hints
The module is used to create text strings – hints – on the screen workspace. In order to create a new hint,
select the Tools / Text hint... menu command. When this command is selected, the following window
appears:
The Text string field contains the hint text; the Font button allows you to select the font name, style
and size. The Slope field sets the slope of the text string in degrees (counter-clockwise).
In order to move the created hint on the screen, drag and drop it to the new location with the right mouse button (MB2).
In order to delete the hint, double-click it with the right mouse button (MB2).
In order to edit the hint (text content, slope), click it with the left mouse button (MB1).
In order to select a single trace or an ensemble of traces, place the cursor over the top horizontal axis at the position corresponding to one of the borders of the data range to be selected. Then click and hold the left mouse button (MB1) and move the cursor to the other border. Release the mouse button – the selected range will be highlighted in inverted colour.
Amplitude corrections and statics can be interactively entered for the selected ensembles.
In order to deselect, press the left mouse button (MB1) above the upper horizontal axis.
The module contains an option for manual data gain control. To use it, select the traces of an ensemble. Then, each time the «+» or «-» key is pressed, the amplitude of the selected traces will be multiplied or divided by 1.4, respectively. This gain/attenuation only affects the display of the data on the screen. The manual gain coefficients are stored along with the flow and will be restored each time the flow is launched.
Amplitude editing: this command is used to perform various manipulations with the interactively
amended amplitudes:
• Cancel amplitude editing – the cancellation of all the actions performed manually with the
amplitudes.
• Invert traces – the polarity inversion for the highlighted ensemble of traces.
• Load from header – the multiplication of the trace amplitudes by coefficients stored in the headers. The header is selected in the standard window that appears when the command is run.
• Save to header – saving the current manually entered trace amplitude multiplication coefficients to the headers. The header is selected in the standard window that appears when the command is run.
Interactive Application of the Processing Procedures
Some of the most common processing procedures can be applied to the data directly in the interactive
Screen Display module. When these procedures are applied, the changes will be saved in the flow data.
That is, after the module window is closed, the flow will contain the already processed data.
On the Tools / Apply procedure menu command, the submenu is opened with the list of possible
processing procedures that can be interactively applied to the data. The help on specific procedures in
this submenu is provided in the descriptions of the corresponding processing modules.
In addition, the submenu contains undo commands for the interactive processing:
• Undo one step back – this command cancels the last procedure application;
• Reverse to non-processed data – this command returns to the original data that existed prior to the application of the interactive processing procedures.
The module is used to interactively select the static shifts for the trace ensembles or individual traces on
the screen. Then, various actions can be performed with them (input, saving in the database, saving in
the headers, saving in a text file, import of the corrections file etc.).
The selected ensemble can be moved up/down with the arrow keys on the keyboard or with the left mouse button (MB1). To do the latter, place the cursor on the selected fragment, click and hold the left mouse button (MB1), and move the mouse up or down. Only the image on the screen moves at this point; the methods for saving the selected statics and for entering them into the data are described below.
NOTE: When using the keyboard, the selected fragment moves by a value equal to the sampling interval; with the mouse, statics smaller than the sampling interval can be entered.
Following the interactive selection of the static shifts, all further manipulations are implemented via
the Tools / Static Correction / menu.
The statics to be saved in the database are essentially picks. They also have the headers that are used to
tie them to the traces (of which there may be more than 2); their processing is similar in many ways.
• Simultaneous shift in panels – if this option is enabled, all the static shifts are carried out
simultaneously for all panels when several panels are used. Otherwise, the static shifts are
implemented only for the traces of the current panel.
• Clear current editing – this command is used to reset either all the interactively entered static shifts or only some of them (the shifts introduced in a specific ensemble of traces). If only part of the statics needs to be reset, select the corresponding ensemble and run this command.
• Invert current statics – this command is used to invert either all the entered values of the static
shifts or only a specific part of the values. In order to apply this command to only the part of the
data, select the data for this command to be applied to.
• Load... – this command is used to apply statics previously stored in the database. When this command is run, the standard dialogue box appears, in which the required statics object must be selected; the Matching field displays information about the headers that are used to tie the selected statics to the traces.
• Save as... – this command is used to save the statics to the database. When it is activated, the Save statics window, similar to the Save pick window, is opened. All actions are similar to those performed when saving a pick. The Matching field contains the tie-in headers to be selected, which must correspond to the type of statics. For example, select the RECNO header if the statics correct for the receiver point.
• Export... – this command is used to save the statics to a text file outside of the database. When it is activated, the Reflect matching fields window appears first, in which the required tie-in headers are selected; then the path and name of the file are specified in the standard dialogue box.
• Convert to pick...: this command is used to convert the current static shifts to the picks on the
screen. When this command is selected, the following window appears:
Here, the time in milliseconds that will be added to the statics values to obtain the pick times can be
specified.
Save to header... – this command is used to save the entered static shifts in the headers. When this
command is selected, the following window appears:
The Header field contains the header to be selected that will host the shift values;
Click Reflect changes in... to select the dataset, the header of which will host the statics. Generally, the
dataset used to handle the shifts should be selected; however, it is possible to select the statics in one
dataset and to save them in another one.
The Reflect matching fields... button is used to select the headers that will be used to tie the shift values.
If a certain piece of data is selected, the Save all and Save selection fields will become active, which
will make the saving of either all statics (Save all) or only the statics of the selected area (Save selection)
to the headers possible. By default, the Save selection field is selected.
• Load from header... – this command is used to apply statics from the headers. When this
command is selected, the following window appears:
In the Header field, the header must be selected the values of which will be used as the statics. This
command has the following modes:
• Replace current – this option replaces the current statics value with the statics value from the
header.
• Replace current when overlap – this option replaces the current value with all statics values
from the header that are not equal to zero. The current statics values remain unchanged where
the statics values of the header are equal to zero.
• Add to current – this option adds the statics values from the header to the value of the current
statics.
• Keep current when overlap – this option applies the statics from the header only where the
current statics are equal to zero. The current shift values remain unchanged unless they are equal
to zero.
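The four modes amount to a simple element-wise merge; a Python sketch (assuming, for illustration, that a zero shift means 'no statics' and that both lists are aligned by trace):

    def merge_statics(current, from_header, mode):
        # current, from_header - equal-length lists of shifts in ms
        out = []
        for cur, hdr in zip(current, from_header):
            if mode == "replace":                   # Replace current
                out.append(hdr)
            elif mode == "replace_when_overlap":    # Replace current when overlap
                out.append(hdr if hdr != 0 else cur)
            elif mode == "add":                     # Add to current
                out.append(cur + hdr)
            elif mode == "keep_when_overlap":       # Keep current when overlap
                out.append(hdr if cur == 0 else cur)
        return out

    print(merge_statics([0, 5, 5], [2, 0, 3], "keep_when_overlap"))   # [2, 5, 5]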
Convert from pick...: this command is used to convert the current pick to the statics. When this option
is activated, the Time shift window (see Convert to pick...) appears, where the time must be specified
in milliseconds. The statics value at the point will be equal to the difference of the values of the horizon
time at this point and the number entered in the Time shift window.
• Difference between picks – this command is used to calculate the statics as the difference in
the values of the two picks. This command is active if the window contains only two different
picks.
• Apply statics... – this command is used to apply a static correction from the headers to the data; this correction will not be considered part of the current statics. When this command is selected, the following window appears:
Select the required header from the list opened by the Add... button in the Add statics field. The values
that are contained therein will be applied to the data as a static correction. The Subtract statics field is
used when other statics values that are stored in the headers need to be subtracted from the statics values
that are selected in the Add statics field. The Delete option is used to delete the selected header from
the Add statics or Subtract statics field.
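The conversion and difference commands described above reduce to simple node-wise arithmetic; a short sketch of the relations as this manual defines them (list-based, for illustration only):

    def statics_to_pick(statics_ms, time_shift_ms):
        # Convert to pick...: pick time = statics value + time shift
        return [s + time_shift_ms for s in statics_ms]

    def pick_to_statics(pick_ms, time_shift_ms):
        # Convert from pick...: statics = horizon time - time shift
        return [t - time_shift_ms for t in pick_ms]

    def difference_between_picks(pick_a_ms, pick_b_ms):
        # Difference between picks: node-wise time difference
        return [a - b for a, b in zip(pick_a_ms, pick_b_ms)]

    print(difference_between_picks([500.0, 520.0], [480.0, 505.0]))   # [20.0, 15.0]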
In order to save the working window in a bitmap image file, use the Tools / Save image... menu
command. This will open the image parameter settings window.
The Image size fields set the image size expressed in mm; the Resolution fields, the image resolution
in dots per inch. The Format field specifies the image saving file format.
Note: The resolution information is not saved in the Windows BMP format. When the correct image size
is important, the use of the TIFF format is suggested. In addition, the compressed TIFF files with the
LZW compression are usually much smaller than the BMP files.
Exit from the Module
In order to exit the module, use the Exit menu command. When working in the frame mode, this command will close the current frame and display the next one.
At the same time, the Exit / Stop flow menu command is also available in the frame mode. Once run, it sends the processing flow the command to stop and not to pass to the next frame. Notice that during the interactive operation of the Screen Display module, the program loads the next frame in the background in order to speed up the processing. Therefore, when the Exit / Stop flow command is selected, another frame or two may still be displayed on the screen.
When the flow is interrupted by the Exit / Stop flow command, the following message will be displayed:
This message is used to inform that the processing flow has received the break command. This routine
behaviour of the software is not an error.
Plotting*
This module outputs processing results to any printing device compatible with the Windows operating system. The module allows changing data visualization parameters (sorting, display method, scaling, amplification, pick and header plot visualization, line width, font size, etc.), adding labels and logos to the image, and working with all standard print setup functions (including image preview before printing).
Plotting is a so-called standalone module, i.e. a module that does not require other modules in the flow.
Plotting parameters
The Dataset field allows the user to select a dataset that shall serve as a source for generation of the
image to be printed. When the button is clicked, a dialog box will appear prompting the user to
select a dataset from the project database.
The Add… button in the Sort Fields field opens a standard dialog box containing a header field
selection list. The user should select appropriate input data sorting keys from this list. The selected keys
will appear in the Sort Fields list. Keys can be moved up and down in the list using the arrow buttons
to the right of the list. Keys can also be deleted from the list by pressing the Delete button.
The Selection field allows specifying the input range for each key. Different key ranges are separated
with a colon.
EXAMPLE:
Let us assume that two sorting keys are selected in the Sort Fields field. In this case, the range string in
the Selection field may look like the following:
*:* – all data sorted in accordance with the two selected keys will be input.
*:1000-2000(5) – data for the second key will be input in the 1000 to 2000 range at increments of 5.
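A hedged parser for this range syntax, supporting '*', 'a-b' and 'a-b(step)' (a simplification for illustration; the module may accept a richer syntax):

    import re

    def parse_key_range(spec):
        # '*' -> None (all values); 'a-b' or 'a-b(step)' -> (a, b, step)
        spec = spec.strip()
        if spec == "*":
            return None
        m = re.fullmatch(r"(\d+)-(\d+)(?:\((\d+)\))?", spec)
        if not m:
            raise ValueError("bad range: " + spec)
        return (int(m.group(1)), int(m.group(2)), int(m.group(3) or 1))

    def parse_selection(line):
        # Ranges for successive sorting keys are separated with a colon.
        return [parse_key_range(part) for part in line.split(":")]

    print(parse_selection("*:1000-2000(5)"))   # [None, (1000, 2000, 5)]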
From t= – start time in ms. Time sections will be displayed on the preview screen and printed starting
from this time.
to – end time in ms. Time sections will be displayed on the preview screen and printed down to this
time. To display all samples down to the end of the trace, enter “0” in this field.
Display mode
• WT/VA – displays the traces using the wiggle trace/variable area method;
• WT – displays the traces using the wiggle trace method;
• VA – displays the traces using the variable area method;
• Gray – displays the traces using the variable density method in the gray-scale palette;
• R/B – displays the traces using the variable density method in the red-white-blue palette;
• Custom – displays the traces using the variable density method in a custom palette;
• Define – this button is active if the Custom option is enabled. Pressing the button opens the
Custom Palette window.
The palette is specified as a set of points with assigned colors. Colors are interpolated linearly between the specified points (a code sketch of this interpolation is given after the list below). The user can create, move and delete points and change their assigned colors. Points are shown as white rectangles on the gray strip under the palette image.
• To move a point, select and drag it with the left mouse button (MB1). The palette will change its appearance as you move the point.
• To change the color assigned to an existing point, double-click it with the left mouse button (MB1). A standard point color selection dialog box will appear.
• To create a new point with a specific color assigned to it, Shift-click the appropriate spot on the palette using the left mouse button (Shift+MB1). A standard point color selection dialog box will appear. A new point with the selected color will be added in the location you clicked on.
• To remove a point from the palette, click on it using the right mouse button (MB2).
• Load palette... Pressing this button opens a standard file opening dialog box to select an RGB
ASCII text file from the disk;
• Save palette... Pressing this button opens a standard file saving dialog box to save the current palette to the disk as an RGB ASCII text file.
• Variable spacing. Select this option to arrange the traces on the preview screen/plot in
accordance with the specified header values. For example, this option allows arranging
stacked section traces on the screen by their world coordinate along the profile. To select a
header, press the field button.
• Ensemble boundaries. Enable this option to have Plotting separate different trace ensembles on the preview screen/plot with gaps. The first sorting key specified in the flow data input module serves as the ensemble key.
• Ensemble's gap. Gap width (specified as the number of traces) between trace ensembles. This field is active if the Ensemble boundaries option is enabled.
• Use excursion ___ traces. This option is used to limit the maximum trace display deviation
when using the WT, VA or WT/VA method. If this option is enabled, amplitudes exceeding
the specified maximum deviation will be clipped on the screen when the trace is displayed.
• Additional scalar. (Display gain). An additional factor to multiply trace samples by before
being displayed on the screen/plot.
• Bias. Trace mean level shift relative to the zero. If traces are displayed using the variable area
method, changing this parameter leads to a change in the black level. Positive values will
result in a shift to the left from the trace zero line and an increase in the blackened area of the
curve. Negative values will reduce the blackened area of the curve. If data is displayed using
any of the variable density methods (Gray, R/B or Custom), this value will shift the zero
relative to the palette center.
• Line width (mm) – width of lines displayed on the screen. This option can be used when
traces are displayed using the WT/VA or WT method.
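As noted above, colors are interpolated linearly between palette points; a minimal sketch (illustrative Python; point positions are assumed here to be normalized to the 0..1 range):

    def palette_color(points, x):
        # points - sorted list of (position 0..1, (r, g, b)); x - position 0..1
        x0, c0 = points[0]
        for x1, c1 in points[1:]:
            if x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
            x0, c0 = x1, c1
        return points[-1][1]

    # An R/B-like red-white-blue palette, sampled a quarter of the way along:
    rwb = [(0.0, (255, 0, 0)), (0.5, (255, 255, 255)), (1.0, (0, 0, 255))]
    print(palette_color(rwb, 0.25))   # (255, 128, 128) - halfway from red to white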
Scales
Label:
• Left – label margin from the left edge of the sheet in mm;
• Right – label margin from the right edge of the sheet in mm;
• Top – label margin from the top edge of the sheet in mm.
Fields – the label text fields are the following:
• Company name;
• Project Title;
• Project Location;
• Comments – for any additional information.
Label logo
• The BMP file (image in *.bmp format) field allows the user to choose a logo to be added
to the label. Clicking the button will open a file selection dialog box.
• Logo Height – logo height in mm;
• Logo Width – logo width in mm;
• Constrain proportions – preserve/do not preserve initial proportions.
• Logo Position
▪ Left – in the left part of the label
▪ Right – in the right part of the label.
T Axis…(time axis parameters)
Show axis – show/do not show axis titles and scale ticks. Enabling Show axis allows the user to
specify major and minor tick display parameters and their value marks as well as axis titles.
Major ticks:
• Number per primary – number of minor ticks per one major tick;
• Tick length (mm);
• Tick line width (mm);
• Show values – show/do not show scale values;
• Show grid lines – show/do not show grid lines;
• Scale font – clicking this button opens the major tick value mark font setup dialog box.
Title:
This group consists of three identical fields used to specify the parameters of horizontal scale titles –
trace labels. Several trace labels can be used in this case, with each label set up independently.
Show axis – show/do not show axis titles and scale ticks:
• Linear axis – linear axes, displays the values of the specified header field.
• Field button – to select a header field whose values will be used as a trace label.
• Time axis – displays shooting time stamps (hour, minute and second).
▪ If header fields containing hours, minutes and seconds are completed in the project, they
can be selected in the appropriate Hour, Minute, and Second fields and used as axis
titles.
• It is possible to specify the trace title step, i.e. interval between trace labels appearing on the
horizontal scale (Step). Possible options:
▪ Different shows a title for the first trace and every subsequent trace with header value
different from the previous trace (does not work for the Time axis);
Title font – click this button to open the axis title font dialog box.
This option allows setting up the pick display parameters. A list of displayed picks is shown on the left
side of the setup dialog box. Picks may be added to the list by the Add horizon…button or deleted from
the list by the Remove horizon(s) button. To select multiple picks to be removed, Shift-click on them
using the left mouse button (MB1).
Individual horizon parameters
Plot headers…
Pressing this button opens the Header plot parameters dialog box that allows specifying the preview
screen/plot display parameters for selected header field value plots:
General parameters – this is a group of general parameters common for all curves.
• Fill background – this option is used to display plots against a solid background. Click on the
colored square to the right to select the background color.
• Scale font… – clicking this button opens the plot tick font setup dialog box.
Curve parameters – this section is used to set up individual display parameters for each header field
in the list. Left-click on one of the selected header fields in the list to view and change its display
parameters.
• Time scale…– enable this option to have header fields interpreted as time values and displayed
in accordance with the existing time scale. This option, for example, can be used to print static
shift values in the real scale.
• Color – specify the display color for the selected header field here. Click on the colored square
to select a new color.
• The fields below are used to set up the plot display scale. They are active only if the Time
scale option is enabled.
• Plot area position (%) – specifies the header plot downward shift as a percentage of the image size. If the value of this field is set to “0”, the plot will be displayed immediately above the time section; if to “100”, at the bottom of the section.
• Plot area width (mm) – Height of the area (in mm) in which the selected header field value
plot will be displayed.
• Line width (mm) – line width in mm.
• Whole range. When this option is enabled, the entire range of header values is scaled to fit it
completely into the specified display area. Otherwise the displayed value range should be set
manually by specifying its minimum (Min scale value) and maximum (Max scale value)
values.
• Show scale. This option allows displaying an axis with a step scale for the selected plot. If it is
enabled, the axis position can be specified in the Scale Position field as percentage of the
screen width (0 – left edge, 100 – right edge).
• Value marks orientation allows specifying the value mark position relative to the scale axis
▪ to the left (Left)
▪ to the right (Right).
Headers to plot – this option is used to select the headers to be displayed as plots.
▪ Add. Use this button to select the header fields to be displayed. A list of headers available
in the current project will open. You can select several headers from the list by left-
clicking while pressing the Shift and Ctrl keys. After selecting the headers, press the OK
button. The selected headers will appear in the list.
▪ Remove – this button removes the selected header fields from the displayed plot list. You can select several headers in the list by left-clicking while holding down the Shift and Ctrl keys. After setting up plot display parameters for one of the header fields in the list, left-click another one. The parameters specified for the previous field will be saved. After setting up all necessary parameters, press the OK button. Selected header field values will be displayed as plots in accordance with the specified parameters.
This window allows the user to preview the image in general, check how many sheets the image will be printed on, and zoom in or out by pressing the buttons or selecting View/Zoom In or View/Zoom Out from the menu. To close the window, press the button or select File/Exit from the menu.
To print the image, set all necessary parameters in the Plotting parameters dialog box and click OK.
Then use the Run command of the flow editor to execute the flow with the module.
3D Volume Viewer*
This module enables 3D display of seismic data. This is a standalone module that does not require any additional modules in the workflow. Datasets from the RadExPro database are input into the module. The module can simultaneously process 2D data profiles and 3D data cubes.
Data requirements
Profile group – 2D profiles. For the profile to load properly, the data set must contain headers with the
X and Y coordinates and a header with the profile index. The latter is a number (profile ID) that
uniquely identifies the profile. If the numbers match, data from different datasets will be considered
parts of the same profile.
Cubes – 3D cubes.
1. The ILINE_NO and XLINE_NO headers must be completed for each trace.
2. The traces must be sorted either by the ILINE_NO:XLINE_NO headers or by the
XLINE_NO:ILINE_NO headers. You can use the Resort* module for quick data sorting.
3. The headers containing the X and Y coordinates of the traces must be present.
The module can process both regular and irregular grids.
Module parameters
When the module is added to the workflow, a dialog box with two tabs appears:
Profile group
Cubes
Parameters
• Add… – adds a 2D profile/ 3D data cube from the database for viewing. The selected dataset is
displayed in the Datasets pane.
• Delete – deletes the selected dataset from the list.
• Delete cache files – deletes the cache. Cache files are created on the disk to speed up the dataset
display process. Pressing the Delete cache files button deletes these files.
• X Coordinate, Y Coordinate – allows selecting the headers that contain the X and Y
coordinates, respectively.
Profile group
• Index – header containing the unique profile number. Datasets with identical indexes are
considered parts of the same profile.
When the module is launched, the Dataset bounds dialog box appears. Here you can set up the color
palette relative to the extreme amplitude values in the dataset.
The dialog box shows the amplitude distribution histogram. The horizontal axis representing the
amplitude values is split into segments called “bins”.
The number of bins is specified in the Bins field. A rectangle is displayed for each segment. The height
of each rectangle is directly proportional to the number of data samples with the amplitudes falling within
the respective segment.
• Lower bound – value truncation boundary. If the number of samples within the bin is smaller
than the percentage specified in the Lower bound field, the samples will be combined.
• Bins – number of segments into which the data range will be divided.
• Refine – determines the exact maximum and minimum amplitude values in the dataset.
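The histogram construction itself is standard binning over the amplitude range; a sketch (illustrative Python, not the dialog's actual code; the random data stand in for dataset amplitudes):

    import numpy as np

    def amplitude_histogram(samples, n_bins):
        # Split the amplitude range into n_bins segments and count samples per bin.
        lo, hi = float(np.min(samples)), float(np.max(samples))   # 'Refine' bounds
        counts, edges = np.histogram(samples, bins=n_bins, range=(lo, hi))
        return counts, edges

    data = np.random.randn(100000)             # stand-in for dataset amplitudes
    counts, edges = amplitude_histogram(data, 50)
    print(edges[0], edges[-1], counts.max())   # extreme amplitudes, tallest bin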
After all preparatory procedures are completed, the main 3D Volume Viewer window opens. It is divided
into several areas:
• Toolbar
• Data display pane
• Objects pane
• Video editor
It is also possible to color the horizons depending on the selected header values.
Display pane
The selected data are rendered in the right pane of the window. For 3D cubes, three slices are displayed
simultaneously: an Inline slice, an xline slice, and a time slice. To view the adjacent slices, press the
button. Hover the mouse cursor over the slice, press and hold the left mouse button, and move the
cursor to the area of interest within the data cube.
You can rotate the cube, change the time scale, and zoom in and out of the image (see Controls).
Toolbar
– global data display properties. When this button is pressed, the following dialog box appears:
Appearance
• Background color – sets the background color. Pressing this button opens the standard color
selection palette.
• Color for cells with no data – color for “gaps”. If there are any gaps in the data, the color
selected here will be applied to them.
▪ Same as background – fill the cells with the background color.
▪ Transparent – make the cells transparent.
▪ Custom – select a custom color. Click the custom text line to open the palette.
• Axes – position of the axes. Possible options:
▪ Top left
▪ Top right
▪ Bottom left
▪ Bottom right
• Model lighting – highlighting the model for better visual distinction between the slices.
▪ Tool bar – show the toolbar.
▪ Status bar – show the status bar.
– save the image. Pressing this button opens the Printing options dialog box.
– gain. Each step increases the gain by 10% of the current value.
Objects pane
This panel displays the list of all loaded datasets. Here you can configure the display options for a specific dataset, add or delete picks, change the currently displayed slice, and delete the selected dataset.
If you right-click the dataset name, a drop-down menu containing the following commands will open:
• Palette – palette setup. Select one of the standard palettes from the drop-down list, or create a
custom palette by selecting the last option – custom.
• Define – dialog box used to set up the palette if the custom option was selected.
• Gain – amplitude gain values.
• Bias – zero shift relative to the palette center.
• Bounding box – bounding box. Line color – box line color.
▪ Line color…
• Grid – this section is used to set up the grid parameters:
▪ Line color – line color.
▪ Mark color – mark color.
▪ Mark font – mark font parameters.
Load horizon – allows loading a pick.
If a pick was loaded, it will be displayed in the Objects panel. Right-click the pick to open the context menu:
• Wire – pick points will be connected with straight lines forming a “wireframe”.
• Surface – pick points will be connected using the triangulation method, forming a surface
consisting of triangles.
• Refine Surface – intermediate points will be added to refine the surface and make it smoother. A more accurate surface requires more time and resources to render.
• Undo last surface refinement – undo the last pick surface refinement operation.
The transparency of the surface created by the pick is set separately in the transparency field.
Record – record a video of the animation. Press the … button to select the file saving options (location
and format).
Controls
<LMB>:
• in the slice selection mode ( ): if the cursor is above a slice, the slice will be selected;
dragging the mouse while pressing down the <LMB> moves the slice; otherwise the model
is rotated;
• in the zoom-in mode : dragging the mouse while pressing down the <LMB> changes
the zoom box size; zooming itself is done with the released <LMB>;
• if no mode is selected, the model is rotated;
• if the <Ctrl> key is pressed, the model is moved regardless of the current mode.
<RMB> – dragging the mouse while holding down this button moves the model regardless of the current
mode.
<F1> – switches to/from the slice selection mode
<F2> – switches to/from the zoom-in mode
<1> – displays the slice with the inline coordinate smaller than the current slice’s coordinate by 1
<2> – displays the slice with the inline coordinate larger than the current slice’s coordinate by 1
<3> – displays the slice with the crossline coordinate smaller than the current slice’s coordinate by 1
<4> – displays the slice with the crossline coordinate larger than the current slice’s coordinate by 1
<5> – displays the slice with the time coordinate smaller than the current slice’s coordinate by 1
<6> – displays the slice with the time coordinate larger than the current slice’s coordinate by 1
<Ctrl>+<Q> – displays the appearance setup dialog box
<W> and <Up> – moves the “camera” up (the model moves down)
<A> and <Left> – moves the “camera” left
<S> and <Down> – moves the “camera” down
<D> and <Right> – moves the “camera” right
<Z> – zooms in to the model
<X> – zooms out of the model
<Ctrl>+<Up> – expands the model along the time axis
<Ctrl>+<Down> – compresses the model along the time axis
3D Volume Viewer*
This stand-alone module is dedicated to displaying seismic 3D data volumes. A 3D volume should be stored in the RadExPro database as a separate dataset. Input data requirements:
The module can work with 2D data profiles as well as with 3D cubes.
Profile group – 2D profiles. To load a profile correctly, the dataset must have X and Y headers and a header with the profile index. Index – a number reflecting a unique profile id. If the numbers match, data from different datasets will be considered parts of the same profile.
Cubes – 3D cubes:
1) ILINE_NO and XLINE_NO must contain the correct inline and crossline positions, respectively.
2) The input dataset sorting should be ILINE_NO:XLINE_NO or XLINE_NO:ILINE_NO. The Resort* module can be used for quick dataset resorting before using 3D visualization.
3) The CDP_X and CDP_Y headers should contain the X and Y trace coordinates.
Parameters
Profile group
Cubes
Add... – adds a 2D profile / 3D data cube from the database for viewing. The selected dataset is displayed in the Datasets window.
X Coordinate, Y Coordinate – set the headers containing the X and Y coordinates, respectively.
Delete cache files – deletes the cache. For faster display of datasets, cache files are created on the drive; they are deleted by clicking Delete cache files.
• Index – a header that contains the unique profile number. Datasets with the same index are considered parts of the same profile.
You can use the arrows to change the profile loading order.
When the module is launched, the Dataset bounds window opens. Here you can adjust the color palette relative to the extreme amplitude values in the dataset.
The dialog box displays the histogram of amplitude distribution. The horizontal axis represents the amplitude values and is divided into segments – bins. The number of bins is specified in the Bins field. A rectangle is drawn for each segment. The height of each rectangle is proportional to the number of samples in the data with amplitudes falling into the corresponding segment.
Lower bound – the limit for rejecting values. If the percentage of samples falling into a bin is less than the percentage specified in the Lower bound field, the bins are combined.
Refine – determines the exact minimum and maximum amplitude values in the dataset.
After all necessary preparatory procedures are completed, the main window of the 3D Volume Viewer
opens. It is divided into several areas:
• Toolbar
• Display window
• Object Panel
• Video Editor
Display window
In the right area of the window the selected data are drawn. In the case of a 3D cube, one inline and one xline slice are displayed simultaneously. In order to view the adjacent slices, click on the button, move the mouse cursor over the slice, press the left mouse button, and move the cursor to the cube area you are interested in.
You can rotate the cube, change its time scale, and zoom in and out.
- activates the slice movement mode. Select a slice with the left mouse button and hold it to move the selected slice. If the mouse cursor is outside the cube, the left mouse button can be used for cube rotation. Use the right mouse button to drag the cube.
- zoom mode. Select the desired cube region to zoom in. To switch off the selected mode, switch off the button manually.
– global data display properties. The following dialog opens as you press it:
Background color – the background color. Clicking this button opens a standard palette for selecting a color.
Color for cells with no data – the color of "gaps". If there are data gaps, they will be painted over with the selected color:
• Top left
• Top right
• Bottom left
• Bottom right
It is also possible to color the horizons according to the values of the selected header.
- image saving. The Printing options dialog appears as you click on it.
– change gain. At each step, the value increases by 10% of the current value.
Objects
This panel displays a list of all loaded datasets. Here you can configure the display of a particular dataset, add or delete picks, change the currently displayed slice, or delete the selected dataset.
Right-clicking on the name of a dataset opens a drop-down menu. It contains the following commands:
Bias – shift.
Bounding box – the bounding parallelepiped. Line color is the color of its lines.
If a pick has been loaded, it appears in the object panel. Right-clicking the pick displays a menu:
3D Gazer*
The module is dedicated to 3D display of seismic cross-sections along the lines. The cross-sections are displayed in their real 3D coordinates.
Each line at the module input must have a unique line ID value stored in one of the trace header fields (by default, SFPIND). This field must be indicated as the first sorting key in the Trace Input module when entering the traces from several datasets into the flow using one single instance of Trace Input (in order to ensure that the traces from one line go together).
Module parameters
Palette. You can select the colour Palette that will be used for data display. The following options are available:
Normalization – allows selecting amplitude normalization method. Available options include: NONE
– no normalization applied, ENTIRE – all traces are normalized altogether, INDIVIDUAL – each trace
is normalized separately.
• Axis – this button calls the dialogue with axes parameters. You can indicate intervals for primary and secondary ticks separately for the time and coordinates. It is also possible to adjust the font.
• Profile ID header – trace header field with the unique line ID value.
• X Coordinate Header - trace header field with X-coordinate.
• Y Coordinate Header - trace header field with Y-coordinate.
After the module is launched you will see its main working window (Fig. 1) containing a 3D image of the loaded data and coordinate axes. Horizontal axes display coordinates in meters; the vertical axis, two-way time in nanoseconds.
Besides, 2 special bars are available: the Tool Bar (visible by default) and the Pick Bar (hidden by default). You can switch the bars on and off through the View menu.
Controlling 3D scene
Tool Bar
The Tool Bar is designed to control the position of the 3D scene relative to the user, as well as to switch
on and off some additional working modes. The purpose of the command buttons is described below:
Fig.2
• Buttons (2),(4),(6),(8) allow rotating the scene around two mutually perpendicular axes; button (5) brings the scene back into the default position.
• Buttons (10),(11),(13),(15) allow moving the scene left, right, up and down; button (12) brings the scene back into the default position.
• Buttons (14) and (16) allow zooming in and out of the scene.
• Button (1) switches on/off the Cursor mode (see details about this mode in the “Working in Cursor mode” section).
• Button (3) switches on/off the Slice mode. In this mode, the amplitude slices of all the profiles are displayed (Fig. 3). The width of the slice display can be adjusted through the menu (Options/Slice Properties…). The vertical position of the slices can be controlled by the “Up” and “Down” arrow keys, or (when the Cursor mode is on) by the mouse.
• Button (7) switches on/off the display of the survey map above the profiles. (A map must be previously loaded via Options/Map…, see the corresponding section of this manual).
• Button (9) switches on/off the continuous rotation mode. In this mode, rotating the scene with the mouse will cause its continuous rotation in the specified direction.
Fig. 3. Slice-mode.
Controlling the scene with the mouse and keyboard in the normal mode
In 3D Gazer you can work with the mouse in two modes: normal mode and Cursor mode.
In normal mode you can use the mouse to rotate the scene, move it, and zoom in and out.
To rotate the scene, click the left mouse button at any point of the screen and move the mouse cursor while holding the button. The scene will be rotated in the specified direction.
To move the scene relative to the observer, move the mouse cursor holding its right button pressed.
To zoom the scene in or out, hold the left mouse button and move the mouse cursor up/down keeping the Ctrl key pressed.
The keyboard can be used to change the vertical scale of the scene – use the Ctrl+“Up” and Ctrl+“Down” key combinations.
In the Cursor mode a 3D cursor is displayed on the screen. It can be used to determine the coordinates of any point on the screen (Fig. 5). The X and Y coordinates as well as the two-way time (t) in nanoseconds are displayed along the coordinate axes and also in the status bar.
Cursor mode.
In Cursor-mode the mouse controls only the 3D cursor position. You can control the scene position
through the Tool Bar.
When the left mouse button is pressed, the 3D cursor follows the mouse cursor in the XY-plane. The vertical cursor position (along the time axis) does not change at that.
When the left mouse button is pressed together with the Shift key, moving the mouse cursor changes the vertical position of the 3D cursor. The position of the 3D cursor in the XY-plane does not change at that.
Picks
Pick Bar
You can work with picks through the Pick Bar. To display the Pick Bar on the screen, use the View/Pick Bar… menu command.
On the Pick Bar, the new button creates a new pick and the delete button deletes the current pick. With the corresponding buttons you can also save the current pick into a text file and load a previously saved pick onto the screen.
IMPORTANT: Since the 3D Gazer picks correspond to linear objects rather than to horizons, their file format is different from that of the RadExPro horizon picks. The XY coordinates of the pick nodes and the node time values are stored together with the line style and colour information. You can switch from one pick to another with the previous and next buttons.
The colored rectangular field to the right shows the color of the current (active) pick. Click on this field to select a new colour for the pick in a standard color-select dialog.
In the Pick group of fields you can specify the name of the current pick and its Size.
Set button updates parameters of the active pick in accordance with the current pick bar settings.
The option of automatic cursor positioning to current pick points (Cursor to current point) allows, in the Cursor mode, quickly moving the 3D cursor from one pick point to another. For that, switch the option on, use the Tool Bar to switch on the Cursor mode, and then use the Ctrl+“Right” and Ctrl+“Left” key combinations. The 3D cursor will jump sequentially from one current pick point to another.
For editing picks on the screen you need to switch to the Cursor-mode.
To add a point to the current pick, position the 3D cursor at the desired location of the 3D scene, place
the mouse cursor at the same location and click the left mouse button holding the Ctrl key pressed. A point
can be added either after the end or before the beginning of the pick. To do this, before adding a point,
“illuminate” either the first or the last point of the pick (a pick point is “illuminated” when the 3D cursor
is placed at the corresponding position in the 3D scene, see Fig. 7). A point can also be added between
two existing pick points. To do this, position the 3D cursor on the connecting line, add a point on the
line and then move it to the desired location.
To move a point of the current pick, position the 3D cursor at the desired point (when the cursor is
properly positioned at a pick point, the point becomes “illuminated”, see fig.7), place the mouse cursor
to the same location of the screen, then catch the point with the right mouse button and drag the point to
the new position (to move the point along the time axis, keep the Shift key pressed).
To delete a point of the active pick, position the 3D cursor at the desired point (when the cursor is
properly positioned at a pick point, the point becomes “illuminated”, see fig.7), place the mouse cursor
to the same location of the screen, then double-click the point with the right-mouse button.
Fig. 7. Editing pick on the screen: when 3D cursor is positioned at a pick point, the point is
“illuminated”.
TIP: When moving/deleting an existing point of the active pick it is convenient to use the option of
automatic cursor positioning to current pick points (Cursor to current point) of the Pick Bar.
TIP: When adding/editing pick points it is often convenient to switch on the Slice-mode, as in this mode
positioning of the 3D cursor is simpler and more intuitive.
Adjustment of data display parameters can be made through the Display Properties dialog window that
is called by Options/Display Properties menu command (Fig. 8).
Fig. 8. Data display parameter dialog
You can select color Palette that will be used for data display. Available options are as following:
• R/B – red-white-blue
• Grey – grey-scale
• Custom – user-defined color palette. When the user-defined palette is selected, the Custom button
becomes enabled. The button is used for selecting/editing custom color palettes. When the button is
pressed, the palette dialog will appear, similar to that used in the Screen Display module. The
dialog allows editing the palette, saving it to hard disk (Save button) and loading previously
saved palettes from disk (Load button).
Normalization drop-down list allows setting a mode of amplitude normalization prior to data display
(ENTIRE – one and the same normalizing factor is calculated for all traces, INDIVIDUAL – separate
normalizing factor is calculated for every individual trace, NONE – no additional normalization is
performed).
Transparency slider allows making a portion of the low-amplitude data transparent, i.e. displaying only
the strong-amplitude parts of the sections (Fig. 9). The current transparency value (the percentage of
absolute amplitude values that become transparent) is reflected above the slider in the Value field.
Fig. 9. Data 3D display with transparency – only the high-amplitude portions of the sections are
visible.
Axes adjustment
The axes parameters can be adjusted through the dialog window that is called by Options/Axis menu
command (Fig. 10).
The Time group allows adjusting vertical (two-way time) axis. The Coordinates group controls X and
Y horizontal coordinate axis. For both groups, you can specify Primary and Secondary tick intervals
(for time axis – in nanoseconds, for coordinate axes – in meters). To the right of every field you can
indicate if the particular type of ticks is to be displayed on the axes.
Use Font button to display a standard dialog for adjusting font to be used for axes tick marks.
The 3D Gazer allows displaying a map above the seismic sections. A map here is an arbitrary BMP
image whose corners are tied to coordinates (see Fig. 4). It could be a survey map with the lines, a
chart of a building where the survey was performed, an area map with topography, etc. The map can be
loaded through a dialog window that is called by the Options/Map… menu command (Fig. 11).
Use Browse button to select a BMP image-file with the map you want to display.
IMPORTANT: An image must be in Windows BMP format in 24-bit RGB color mode. In case your
map is in a different format, use any third-party image editing software to convert it to the appropriate
format.
When the file is selected, its name will be displayed in the BMP file string.
The transparency of the map can be calculated by two methods. If Use proximity to color option is on,
transparency is calculated automatically for every color of the map through the proximity of this color
to the specified opaque color. Use Set button to select a color that is to be opaque. The other colors of
the map will be as transparent as they differ from the specified color.
When the Use proximity to color option is off, use the Set button to select the color that is to be transparent.
All other colors of the map, in this case, will be considered to be solid. Then you can separately adjust
the actual transparency of the transparent color and solid colors by means of Solid Percent and
Transparency Percent values. These fields can take on values from 0 (fully transparent) to 255 (fully
opaque).
In order to properly tie the map to the data, in Coordinates group of fields specify coordinates of Top
Left and Bottom Right corners of the map (in meters). The coordinates can also be loaded from a text
file: the 2nd line must contain X and Y coordinates of the top left corner, the 4th line – those of the bottom
right corner (see Fig. 12).
Fig. 12. An example of a text file with map coordinates opened in a standard Windows Notepad
application.
After the map is loaded, use “Map” button of the Tool Bar to make it visible on the screen.
Interactive QC*
Interactive QC module is an interactive environment for on-land 3D/2D seismic data quality control.
The module enables simultaneous work with several source point, receiver point, CDP attribute maps,
location map and a seismogram selected on a Source map. All maps and the seismogram display window
in the module are fully-synchronized with each other. Additionally, it is possible to view the survey
statistics: total number of shots, number of bad shots (under the selected criterion), and coverage area.
The module can be used for analysis of an already existing, earlier recorded dataset or (only in RadExPro
Real-Time configuration) for analysis of a dataset appended in real time while new data is being
acquired.
NOTE! For correct module operation, the following headers should be filled in: R_LINE, REC_SLOC,
S_LINE, SOU_SLOC, FFID, CDP, SOU_X, SOU_Y, REC_X, REC_Y, CDP_X, CDP_Y. In case of
2D data it is recommended that you copy receiver point numbers into the REC_SLOC header and fill
the R_LINE header with a constant, e.g. you can assign 1 to it.
Module parameters
In the Input dataset edit field, specify a dataset for analysis. (You can choose an existing dataset or, if
working in the real-time, specify a new one. In the latter case, after you run the flow the module will be
on standby until this dataset is created.)
NOTE! Selected dataset traces must be sorted by FFID. If there is no sorting, the module will start, but
the data will be displayed incorrectly.
SPS – (optionally) SPS files with design coordinates of sources (SPS-S) and receivers (SPS-R) (pre-
plots).
Grid – (optionally) binning grid saved earlier from the 3D CDP binning interactive tool or from the
dialogue of the 3D CDP Binning module. The grid is used only for statistics computation – the CMP
coverage area – and nowhere else.
In the Visual scheme bar you can specify a path to a module scheme – this is a DB object with a set of
the module's main display parameters. In this scheme, the sizes and positions of all windows as well as the
data display parameters are saved. The scheme is saved automatically when the main window is closed
and loaded again when the window is opened.
The module displays a set of maps color coded according to selected attributes from header fields: you can create several
Source maps, Receiver maps, and Location maps (on Location maps it is possible to display any
combination of source points, receiver points, and CDPs, both collectively and individually). When the
main window is opened, the following three default maps are displayed: Source map, Receiver map, and
Location map.
Attributes for displaying on maps (except for the fold) should be computed in advance and stored in
dataset header fields.
Additional maps can be added to the main window through the Windows menu commands:
Moreover, you can clone any of the opened maps – just right click on the map working area and choose
Clone map command in the context menu.
An exact copy of the current map will open, where you can change parameters.
By default, all maps of one type (for example, all Receiver maps) are fully-synchronized with each other
(they have common scale and position of scroll bars). If needed, synchronization of any map may be
switched off in the map settings window (Settings button). Maps of different types have a synchronized
cursor (for example, if you move the mouse cursor over the Source map, on all other maps the current
cursor position will be indicated with a cross).
A left click on a source point opens a corresponding seismogram in a separate window. In this case, the
template is illuminated on all Receiver maps. While moving the cursor over the seismogram window,
source point, receiver point and CMP corresponding to the current trace under the cursor are illuminated
(points are illuminated on all maps where they are displayed):
During real-time operation, the module checks the analyzed dataset every minute, and if there are any
new data, it re-draws all the maps. Moreover, you can always additionally open a special window to
display the last seismogram acquired in real time – Real-Time Seismic window. This window can be
shown or hidden with the Windows/Real-time seismic menu item (during ordinary operation the last
seismogram from the analyzed dataset will be shown).
• Windows/Tile – places the internal windows on a grid filling the main window. The grid
consists of 2 rows. If the number of windows is even, all windows will be of the same size. If the
number of windows is odd, one of the windows will be 2 times bigger than the rest.
• Data/Refresh – when working in the real-time mode, read new ensembles (if any) before it
happens automatically.
When working with several monitors, you can press “Extend window to all monitors” button
located in the upper right corner of the main working window.
Maps
Three types of maps are used in the module: Source maps, Receiver maps and general Location maps,
which can display source points, receiver points and CMPs. An example of a receiver map color coded
according to the receiver elevation attribute is shown below:
Map windows consist of the map itself framed with coordinate scales, a color palette (shown only when
mapped points are color coded by an attribute), a tool bar with buttons and a status bar.
Points on the map are shown in accordance with their coordinates: for source point SOU_X/SOU_Y
headers are used, for receiver points – REC_X/REC_Y headers, and for CMPs – CDP_X/CDP_Y
headers.
The tool bar contains the following buttons:
• Settings
• Start zoom
• Zoom out
• Fit to window
• Set axis ratio 1:1
• Save image
Besides, right mouse button click on the map area opens context menu with the following commands:
• Export to text file – values of all headers related to this map will be exported (e.g. in case of
export of a source map colored according to the SNR attribute, the FFID, SOU_X, SOU_Y and SNR
headers will be saved into the file)
• Clone map – cloning of a map (a copy of the current map will be created with all parameters
set)
Zooming in of a map is performed using a zooming frame. Click the Zoom button on the tool bar
and, using the left mouse button, select a rectangular area of the map you want to fit to the window. To unzoom,
use the Unzoom button. Besides, in order to zoom/unzoom elements of a map you can scroll the mouse
wheel when the mouse cursor is inside the map area. The “Fit to window” button fits the entire map to the
window.
Settings
Click the Settings button on the toolbar to open the Settings dialog window. Apart from that, the dialogue
window can be opened by a double click on the scales. In that case, the corresponding section of the dialogue
window will be opened.
Window settings are grouped in sections. The Settings dialogue window is divided into 2 parts: on the
top, there is a list of sections; parameters of the selected section are shown at the bottom part. Ticks to
the left of section names control visibility of corresponding elements of the map or activity of options.
For different types of maps the settings are slightly different, depending on the elements the map consists of.
The most complete list of settings, for a location map, is shown below. Maps of other types will lack
some of these settings.
These sections include settings for display of source points, CMPs and receiver points (Sources, CDP
points, Receivers) correspondingly – settings for all of these sections are similar.
Attribute Header – header containing attribute values that will define the color of every point according
to the color palette. Default value is <NONE> – attribute not selected – in this case, all points will be of
the same color according to Point color value.
In addition to the headers, the Attribute Header drop-down list for CDPs allows choosing a special value
<CDP FOLD>. In the same way, there is a special value for receiver points – <RECEIVER FOLD>.
Point radius is a capture radius around a point on the screen: if the mouse cursor is inside this circle, the
cursor is considered to be on the point.
Point symbol – you can select between a circle, a square, a triangle and an asterisk.
Palette view – show/hide the color palette in the right part of the window. This and further options are
available only if an attribute header is chosen. Note that for location maps the palette settings are
common for all three sets of points: if you change the palette for Sources, it will also change for CDPs
and Receivers, and likewise for all types of points.
Palette – shows current color palette. Click on the image to change the palette.
In order to set the palette manually, select Custom from the drop-down menu and then click the Define
button:
To change the palette, change the position of the sliders. You can save your custom palette using the
Save... button.
Palette left, right – values of the attribute corresponding to the left and right ends of the color palette. If both
values are equal to zero, the minimum and maximum of all attribute values are used. The left end of the palette
corresponds to the minimum value, and the right end corresponds to the maximum.
Palette mapping – method of interaction between a color palette and certain attribute.
• Simple – in this case a linear dependence is set between the attribute values and the palette (the
left edge of the palette corresponds to the minimum value of the attribute, the right edge of the palette
to the maximum value of the attribute).
• Advanced – you can indicate exactly how your attribute values should map to the color
palette. Simply set some reference points on the palette and indicate the corresponding value of each
point. As a result, your value-to-palette mapping becomes a piecewise linear function, fixed at the
reference points (0% – leftmost side, 100% – rightmost side) and linearly interpolated in between.
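As a rough sketch of the Advanced mapping (the function and argument names are illustrative, not the program's internals), the piecewise linear interpolation can be expressed as follows:

```python
import numpy as np

def attribute_to_palette_percent(values, ref_percent, ref_values):
    # ref_percent: palette positions of the reference points (0..100),
    # ref_values: the attribute values assigned to them (ascending).
    # Between reference points the mapping is linear; outside the range
    # values are clipped to the palette ends.
    return np.interp(values, ref_values, ref_percent)

# e.g. attribute value 9.0 lands in the middle of the palette:
# attribute_to_palette_percent([9.0], [0, 50, 100], [5.0, 9.0, 30.0]) -> [50.]
```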
Azimuth Section
When a seismogram display window is open and the mouse cursor is over the seismogram, the source
point, receiver point, and CMP of a current trace under the cursor are highlighted on the location map.
"Azimuth" is the line which connects these 3 points. The parameters for its display are configured here.
You can change the line color (Color), transparency (Transparency), line width in millimeters (Line
width).
Sources preplot, Receivers preplot Sections
If there is a preliminary survey plan available with information on the designed locations of the
sources and receivers (the so-called “preplots”), it can be loaded from the SPS files that are
selected in the module parameters dialog (see Fig. Module parameters). If these files are loaded, the
designed location of the sources and receivers will be displayed on the map in gray color. In this
section, you can configure the display parameters for the preplots:
• Symbol size – is the size of the displayed point symbol;
• Point symbol – select from a circle, a square, a triangle and an asterisk;
• Point transparency – from 0 (completely opaque) to 100 (fully transparent).
On any of the maps you can load a bitmap image as a background. This can be an image obtained by
aerial/satellite photographic survey or simply a scanned map of the study area. In this section, you can
configure the parameters of the background image:
• File – is a file from which the image will be read, all major bitmap formats are supported;
• Transparency – from 0 (opaque) to 100 (completely transparent);
• Left, Right, Top, Bottom – coordinates of the image corners on the map (left, right, top and
bottom coordinates, respectively).
Sections for scales settings: Left scale, Bottom scale, Right scale, Top scale
The configurable parameters for all scales are the same and described below. Note that Right and Left
scales, as well as Top and Bottom ones, are synchronized in pairs, so changing the parameter of the top
scale will result in a similar change in the same parameter of the bottom scale and vice versa.
Here you can manually specify the exact position and size of the window on the screen: the coordinates
of the upper left corner of the window (Window left, Window top), its width (Window width) and
height (Window height). The values are specified, depending on the Window units field, either in
pixels or in millimeters.
Synchronized option
By default, all maps of the same type (for example all source point maps) are synchronized with each
other. For example, if you change the display scale on one Source Map, the scale of all other source
point maps will change in the same way. The following actions are synchronized: changing of the
window size, changing of the scale, position of scroll bars. If necessary, a particular map can be disabled
from synchronization by removing the check from the box Synchronized.
Exclusive areas
In the Interactive QC module you can show polygons that were added in Exclusive areas on the maps:
To do so, on the Exclusive areas tab of the map settings window, mark with a tick the polygon that needs
to be shown.
You can also adjust the visualization parameters in the visualization settings on the Exclusive areas tab:
Traces are displayed in color in accordance with the palette specified in the window settings.
You can navigate between gathers using the Gathers list that opens, or simply by clicking the arrows
on the toolbar:
The window consists of a seismogram framed by vertical scales and a top horizontal scale, a color bar, the
tool bar with buttons at the top and the status bar at the bottom.
When you move the mouse over the seismic display, the trace and sample under the cursor are
highlighted. In the same way, the header value displayed on the horizontal scale for the current trace under the
cursor is highlighted. The current trace number, sample number, amplitude and time are displayed in the
status bar at the bottom of the window.
The tool bar contains the following buttons:
• Settings
• Start zoom
• Zoom out
• Fit to window
• Increase gain
• Decrease gain
• View spectrum
• Save image
Zooming in of the seismogram is performed using a zooming frame. Click the Zoom button on the
tool bar and using left mouse button select a rectangular area of the seismogram you want to fit to
window. To unzoom, use Unzoom button. "Fit to window" button fits the whole seismogram
to the window.
Settings
Click the Settings button on the toolbar to open the Settings dialog window. Besides, the dialogue window
can be opened by a double click on the scales or the palette bar. In this case, the corresponding section of the
dialogue window will be opened.
Window settings are grouped in sections. The Settings dialogue window is divided into 2 parts: at the
top, there is a list of sections; parameters of the selected section are shown in the bottom part. Ticks to
the left of section names control visibility of the corresponding elements of the window or activity of options.
Seismic display Section
Here, the parameters for the seismic display are configured: Max visible traces (the default 0 means
showing all traces in the seismic gather); amplitude Normalization, which can be Per trace, Global (one
factor per whole gather displayed), or None; the color Palette; and Palette mapping, which may be either Through
gain and bias, or by specifying exact numerical values of the amplitudes at the bounds – Exact bounds.
Depending on the selected way of palette mapping you can specify either Gain and Bias, or amplitude
values corresponding to the left and right edges of the palette color (Left palette entry maps to, Right
palette entry maps to). It is possible to adjust palette screen visibility and location: to the right or to the
left of a seismogram (Palette view side).
First breaks plot section
First Break Plot option, when on, draws a theoretical direct-wave travel-time curve on top of the seismic
display. The travel times are calculated based on the loaded geometry and a specified constant velocity.
This often helps to visually identify geometry problems and thus works as an additional geometry QC
tool. In order to set up the direct-wave travel-time curve, choose a source-to-receiver Offset header,
specify a constant Velocity and set the parameters of the line display (Color and Line width).
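The curve itself is simply a straight line in offset; a minimal sketch (the function name is hypothetical):

```python
def direct_wave_time_ms(offset_m, velocity_m_s):
    # Theoretical direct-wave traveltime t = offset / velocity,
    # returned in milliseconds for an offset in meters and velocity in m/s.
    return 1000.0 * abs(offset_m) / velocity_m_s
```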
Time scale section
Synchronized time scales are displayed on the right and on the left of the seismogram. Here you can
configure their appearance: change range of values (From and To; zeros in these fields mean displaying
the full range); set a step of major ticks (Major step) as well as minor ticks (Minor step); disable minor
and/or major ticks (Major ticks visible, Minor ticks visible); set up font parameters for both major and
minor ticks separately (Major font, Minor font); select scale Color; hide one or both scales (Axis on
left, axis on right). You can call scale settings directly from the seismic display by double clicking the
scale you need to change.
Header scale Section
Horizontal scale of header values is located on top of the traces. Here you can select one or more headers
to be displayed on the scale. There are several ways to distribute the traces marks on the horizontal scale:
• Visible marks/Different to mark a trace only if the header value is different from that of the
previous trace;
• Visible marks/Interval (in traces) to mark every N-th trace;
• Visible marks/Multiple of some value to put a mark when the header value of a trace is a
multiple of the specified value;
• Always mark first trace, Always mark last trace – enforce header values of the first/last traces
to be displayed.
You can also set the number of Digits after decimal point, and the style and color of the scale Title font
and Signs font.
Window geometry Section
Here you can manually specify the exact position and size of the window on the screen: the coordinates
of the upper left corner of the window (Window left, Window top), its width (Window width) and
height (Window height). The values are specified, depending on the Window units field, either in
pixels or in millimeters.
The parameters of this section are discussed in section Working with spectra.
To see the amplitude spectrum of a portion of the data, click the corresponding button on the tool bar and use the
mouse to select a rectangular window of data for which to calculate the amplitude spectrum. A separate window
with the calculated spectrum will appear. Select the File/Add spectrum menu item of the spectrum
window to plot several spectra in one window.
The spectrum window is divided into regions: the spectrum plots on the left, the Status bar at the bottom of the
window, the Tool bar at the top, and the Legend (a list of open spectra) on the right. You
can switch these elements on and off in the View menu. You can also enable a Graphic Legend, which
will be displayed directly in the window with spectra plots:
Using the toolbar at the top or through the Scale type menu, you can select the type of
spectrum scale: Raw amplitudes, Percent Amplitudes, or dB Amplitudes in relation to a specified
(adjustable) amplitude.
Set the spectrum display parameters by selecting Parameters/All parameters…. In addition, you can
set the default spectrum display parameters using the parent seismogram window settings (the Settings button
on the tool bar), in the Spectrum defaults section.
Here you can specify a Reference amplitude for the dB scale to be calculated. The default value is 0,
meaning that for each spectrum curve the dBs will be calculated with reference to its maximum.
However, specifying an explicit value may be useful when comparing spectra from different windows.
You can change the Line width and Fill area under the curve with curve color.
Save the picture with the amplitude spectrum plots through the Spectra window menu File/Save image.
You can configure the saved image settings in several ways: through the image parameters dialog, which
is called from the spectra menu Parameters/Image parameters or Parameters/All
parameters/Image. In addition, the default parameters for saving the spectrum image can be adjusted
in the seismogram parent window settings (Spectrum defaults/Image).
Here you can select a folder to save the files (Target folder), its name prefix (File prefix), File
extension which would define the output format, image Height and Width, Vertical resolution and
Horizontal resolution.
You can save the resulting amplitude spectrum plot as a text file via the Spectra window menu File/
Save current spectrum… or by clicking the corresponding button on the tool bar.
If the Graphic Legend is displayed in the Spectrum window, it will also be saved when saving the image.
You can set the parameters of the Graphic Legend through the menu Parameters/Legend parameters
... or Parameters/All parameters.../Legend.... In addition, the default parameters of the legend can be
set in the parameters of the seismogram parent window (Spectrum defaults/Legend)
Here you can change the position of the Graphic Legend in the image (Relative X position, Relative Y
position), label it (Title, Title font, Labels font), frame it (Frame visible, Frame color, Frame width),
add a background (Background visible, Background color), and specify the corner rounding in percentage
terms (Round corners).
Select View / Graphic legend from the menu to make the legend visible in the spectrum area. In order
to open the Graphic Legend settings, you can also double-click on it with the left mouse button.
Frequencies, as well as points on the current plot corresponding to this frequency, are highlighted in
color. The status bar displays the frequency, raw amplitude and amplitude values in the current units.
To configure the graphical axes, double click on them with the left mouse button. The axes settings
window will appear:
Here you can select the Start value and the End value of the parameter.
You can change the color of the plots and their names by double-clicking on the legend in the right part
of the window. Go to Parameters/Windows title…, or Parameters/All parameters…/Windows
title…, to change the resulting image header.
You can change the scale along the vertical and horizontal axes separately, by moving the mouse cursor
in the spectrum window itself. Double-click the left mouse button in the screen area to return the diagram
to the initial scale.
On a right-click, a pop-up menu is available for the list of elements in the right part of the window: you
can Rename the spectrum, Change color…, Cut it to the clipboard, Paste it from the clipboard into
the same or another Spectra window, and Delete it from the window.
Statistics window
To open or hide the survey statistics window, use Windows/Show statistics command in the main
window menu. The following parameters of the survey are shown:
Shot count – total number of shots in a data set
Bad shot count – number of bad shots (in accordance with a criterion chosen in window parameters).
The following 2 parameters are shown only when a binning grid is selected in the module parameters
dialog:
Settings
Click the Settings button on the toolbar to open the Settings dialog window. Window settings are
grouped in sections. The Settings dialogue window is divided into 2 parts: on the top, there is a list of
sections; parameters of the selected section are shown at the bottom part.
Header field – the header field containing values that will be used for shot rejection. The header has
to be filled in prior to working with the module. It can contain values of any quality attribute (for
example, signal-to-noise ratio) or represent a sort of combined data quality coefficient calculated on the
basis of several attributes in the Trace Header Math module. The value of this header is implied to be
equal in all traces of the same common shot gather. An example of a possible calculation formula of
such a coefficient is shown below, but you can use any of your own formulas:
In this example, QC_COEF takes the value 0, 0.9 or 1 depending on the central frequency and the signal-
to-noise ratio. If the frequency QC_F<15 Hz or the signal-to-noise ratio QC_SNR<5, the result is 0; if the
frequency is more than 15 Hz and QC_SNR is within the range from 5 to 10, the result is 0.9; if QC_F
≥ 15 Hz and QC_SNR ≥ 10, the result is 1. You can also use a more complicated and flexible integral
criterion, for instance, evaluated as a weighted sum of individual QC attributes of the record, such as
SNR, central frequency, bandwidth, etc.
For example, if the rejection criterion parameters are set as shown below, a shot will be considered a
bad one if the QC_COEF header value is less than 0.9:
Parameters for calculating CMP coverage. There is only one parameter herein:
Threshold fold – CMP bins with the resulting fold lower than the value specified herein will not be taken
into account when calculating the coverage area. By default, this value is 0, which means that the
program will consider a bin filled and take it into account when calculating the coverage area even when
it contains only one trace.
Window geometry Section
Here you can manually specify the exact position and size of the window on the screen: the coordinates
of the upper left corner of the window (Window left, Window top), its width (Window width) and
height (Window height). The values are specified, depending on the Window units field, either in
pixels or in millimeters.
Crossplots
You can also create crossplots that represent the location of points in the space of two arbitrary
headers. To create a crossplot, click Add Crossplot... in the Window menu:
A dialog window will appear where you should specify the headers for the crossplot axes:
To change the visualization, open the parameter window by clicking the icon on the Toolbar. To color
the crossplot points according to the desired attribute (third header), set the Attribute header in the
crossplot parameters. For example, you can draw a color map of the RMS amplitude in FFID - CHAN
space:
It is also convenient to analyze the links between different quality attributes by crossplots:
Ensemble Header Statistics
The module is designed to analyze the values of the headers in ensembles of traces going through the
flow. The results of the analysis are output in the header specified by the user.
Operating principle:
The module successively accepts trace ensembles at the input and analyzes the values of the header
specified by the user within each ensemble.
Module parameters
Input/output headers:
• Number of values: the module will calculate the number of traces in the ensemble with the
header value higher/lower than one specified in the Threshold parameter
▪ Greater: the number of traces with the header value greater than Threshold.
▪ Less: the number of traces with the header value less than Threshold.
• Number of consecutive values: the module will calculate the number of successive traces with
the header value higher/lower than one specified in the Threshold parameter.
▪ Greater: the number of traces with the header value greater than Threshold.
▪ Less: the number of traces with the header value less than Threshold.
• Maximum: the module will find the maximum header value in the ensemble.
• Minimum: the module will find the minimal header value in the ensemble.
• Average: the module will average the value of the specified header within one ensemble.
• Median: the module will take the median value of the specified header within one ensemble.
• Alpha trimmed mean: the specified percentage of the smallest and greatest values is
excluded, and then the average of the remaining values is computed. When selecting Alpha trimmed mean, it is
required to specify the percentage to exclude (Alpha).
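A minimal sketch of these statistics (mode names are illustrative; 'Number of consecutive values' is assumed here to mean the longest run of consecutive traces passing the threshold, which may differ from the module's exact convention):

```python
import numpy as np

def ensemble_stat(values, mode, threshold=0.0, alpha=10.0):
    v = np.asarray(values, dtype=float)
    if mode == "number_greater":              # Number of values / Greater
        return int(np.sum(v > threshold))     # ('Less' mirrors this with <)
    if mode == "consecutive_greater":         # Number of consecutive values
        best = run = 0
        for x in v:                           # longest run above the threshold
            run = run + 1 if x > threshold else 0
            best = max(best, run)
        return best
    if mode == "maximum":
        return float(v.max())
    if mode == "minimum":
        return float(v.min())
    if mode == "average":
        return float(v.mean())
    if mode == "median":
        return float(np.median(v))
    if mode == "alpha_trimmed_mean":          # drop alpha % smallest/greatest
        k = int(len(v) * alpha / 100.0)
        return float(np.sort(v)[k:len(v) - k].mean())
    raise ValueError(f"unknown mode: {mode}")
```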
Point In Map Polygon module
The module is designed to check whether traces going through the flow belong to the indicated map
polygons in the survey area.
The algorithm checks 2 headers specified in the module settings for each trace going through the flow. If
the point determined by the headers falls into at least one polygon, value 1 is assigned to the trace header
specified by the user. If the trace doesn't fall into any of the polygons, value 0 is assigned to the header.
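A minimal sketch of such a test using the standard ray-casting algorithm (the module's actual implementation is not documented; names are illustrative):

```python
def point_in_polygon(x, y, polygon):
    # polygon: list of (x, y) vertices; a horizontal ray is cast to the
    # right of the point and edge crossings are counted.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_mark(x, y, polygons):
    # 1 if the point falls into at least one polygon, 0 otherwise
    return 1 if any(point_in_polygon(x, y, p) for p in polygons) else 0
```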
Module parameters
• X coordinate header, Y coordinate header – a pair of headers containing coordinates that will
be checked by the module for trace belonging to this or that map polygon. For example, pairs
SOU_X:SOU_Y, REC_X:REC_Y and CDP_X:CDP_Y can be such headers.
• Point in polygon mark header – indicate a header to which value 1 or 0 will be assigned
depending on whether the trace falls into the specified map polygons.
• Add Polygons – add a map polygon from database.
• Remove rows… – delete highlighted map polygon from the table.
Signal Processing
Amplitude Correction
This module allows for different types of amplitude correction of the data. When you access this module,
the following window appears:
Parameters
The options available in this window are described below:
• Time raised to power – multiplies each sample value by its time, raised to a power.
• Exponential correction – multiplies every sample by the exponential function e^(kt). This allows
approximate compensation for intrinsic attenuation in the medium. Neither variation of the attenuation
coefficient with depth nor its frequency dependence is taken into account. When activating this
option, specify the constant in dB/s to enable the amplitude correction calculation.
Normalization option – the Time raised to power and/or Exponential correction modes can be used with
the Normalization option. It keeps the amplitudes unchanged at a certain time by multiplying the amplitudes
by a normalization coefficient. The normalization constant can be chosen from the following options:
• None – no normalization will be applied
• Max application time – additionally, one can limit the maximum application time of the gain
function. This can be useful when you do not want to over-amplify the lower part of the seismic section. After the
specified time, the gain function is held constant, equal to its value at the previous
sample.
Automatic Gain Control – allows automatic change of the gain applied to trace samples as a function
of the amplitudes in a sliding AGC window. For each trace, the gain scalar is calculated for every position of
a window of a given length, sliding down the trace with a step of one sample. Then, the scalar is applied
to the specified sample within the window, i.e. the amplitude of the sample is divided by the scalar. As
a result, amplitude variations along the trace are reduced. When activating this option, specify:
• Operator length - the length (in ms) of the sliding window that will be used for the gain scalar
calculation.
• Type of AGC scalar - Mean is used for AGC scalar calculation as mean of absolute values of
amplitudes in the sliding window. RMS (root-mean-square) is used for AGC scalar calculation
as root-mean-square of the amplitudes within the window. To select a type of AGC scalar, click
the left mouse button (MB1) in the corresponding field.
• Basis for scalar application - CENTERED applies the scalar to the central sample at every
sliding window position. TRAILING applies the scalar to the last sample in every sliding
window. LEADING applies the scalar to the first sample in every sliding window. To choose
the position of scalar application, click the left mouse button (MB1) in the corresponding field.
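A simplified sketch of this AGC logic (window handling at the trace edges and the exact window placement for TRAILING/LEADING are assumptions):

```python
import numpy as np

def agc(trace, dt_ms, operator_ms, scalar_type="mean", basis="CENTERED"):
    n = len(trace)
    half = max(1, int(operator_ms / dt_ms) // 2)
    out = np.zeros(n)
    for i in range(n):
        if basis == "CENTERED":
            lo, hi = max(0, i - half), min(n, i + half + 1)
        elif basis == "TRAILING":           # scalar applied to the last sample
            lo, hi = max(0, i - 2 * half), i + 1
        else:                               # LEADING: first sample of the window
            lo, hi = i, min(n, i + 2 * half)
        w = trace[lo:hi]
        # mean of absolute values or RMS of the amplitudes in the window
        s = np.mean(np.abs(w)) if scalar_type == "mean" else np.sqrt(np.mean(w ** 2))
        out[i] = trace[i] / s if s > 0 else 0.0
    return out
```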
Trace equalization – calculates an individual gain scalar for each trace. The scalar is calculated over
the indicated fixed time window. Then, all amplitudes of the trace are divided by this scalar. As a
result, the amplitude variations between the traces are reduced. When activating this option, specify:
• Basis for scaling. Defines the amplitude to be used as a gain scalar: Mean, RMS, Maximum
(i.e. mean, root-mean-square, or maximum amplitude within the specified window). To select the
basis for scaling, click the left mouse button (MB1) in the corresponding field.
• Time gate start time. Defines the start time of the scalar calculation window.
• Time gate end time. Defines the end time of the scalar calculation window.
Time variant scaling defines the law of amplitude scaling along the trace. When the option is
activated, specify time-scaling pairs as a text string. The syntax of the string is as follows:
time1:gain1,time2-time3:gain2,...,timeN:gainN
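A sketch of how such a string could be parsed and applied, assuming gains are linearly interpolated between the specified times and held constant outside them (the program's exact interpolation rule is not documented):

```python
import numpy as np

def parse_tvs(spec):
    # '0:1,500-1000:2,2000:4' -> sorted (time, gain) knots; a
    # 'start-end:gain' entry contributes two knots with the same gain.
    knots = []
    for item in spec.split(","):
        times, gain = item.split(":")
        for t in times.split("-"):
            knots.append((float(t), float(gain)))
    return sorted(knots)

def apply_tvs(trace, dt_ms, spec):
    t = np.arange(len(trace)) * dt_ms
    knots = parse_tvs(spec)
    tt = [k[0] for k in knots]
    gg = [k[1] for k in knots]
    return trace * np.interp(t, tt, gg)   # linear between knots, clipped outside
```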
Normalization theory
g(t) – gain function, which is applied to the amplitude values; t0 – normalization time.
For Time raised to power (p – user-defined positive power):
g(t) = (t/t0)^p, if t0 > 0 and p > 0; otherwise g(t) = t^p.
For Exponential correction:
g(t) = e^(kt), if t0 ≤ 0; g(t) = e^(k(t−t0)), if t0 > 0.
When a Max application time Tmax is specified, the gain function is held constant beyond it:
g(t) is applied as above for t ≤ Tmax, and g(t) = g(Tmax) for t > Tmax.
Spherical Divergence Correction
The Spherical Divergence Correction module applies a time-variant gain function to traces. Every sample of
each trace is multiplied by the gain function, which compensates for the amplitude decay due to
wavefront spherical divergence. The gain function is defined by two parameters: velocity function and
normalization. When you access the module, the following window appears:
Parameters
Parameters group Velocity:
Define velocity function – When you click on the button, a velocity model selection box opens:
User can define a velocity function manually (Single velocity function), read the velocity function from a
text file (Use file), or use a database velocity pick (Database – picks). The 2D velocity function is interpolated
and matched to every trace according to its CDP header.
A 3D velocity function will be interpolated by triangulation. To interpolate data by coordinates, user
should select Use coordinates based interpolation. Then, velocity function is placed in correspondence
with a trace using CDP_X and CDP_Y headers. If there are no coordinates, the module will fail.
If Use coordinates based interpolation is not selected, the module will work using pseudo-coordinates:
it assigns ILINE and XLINE values calculated from CDP values to the velocity model and matches the velocity
function to traces using the ILINE_NO and XLINE_NO headers.
Velocity domain defines the domain in which this velocity field is set: Time or Depth.
Velocity type defines the type of the specified velocity: RMS or Interval.
When the amplitude is corrected for the spherical divergence, each sample of the trace is multiplied
by the gain function g(t). Without normalization, this function is:
g(t) = t · v(t)²
The t0 parameter may be a Constant time (default is 0 – trace center), set by a pick (Horizon), or taken
from a Header.
With normalization at time t0, the gain function is:
g(t) = 1, if t ≤ t0; g(t) = (t · v(t)²) / (t0 · v(t0)²), if t > t0.
If the normalization term t0 · v(t0)² is negative, vanishingly small or equal to zero, the gain function is
calculated by the formula:
g(t) = t · v(t)²
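A minimal sketch of this gain computation for a single trace, assuming v(t) is given at the trace sample times (names are illustrative):

```python
import numpy as np

def spherical_divergence_gain(t_ms, v_m_s, t0_ms=0.0):
    # g(t) = t * v(t)^2, normalized so that g(t0) = 1 when the
    # normalization term t0 * v(t0)^2 is usable; otherwise the raw
    # gain t * v(t)^2 is returned, as described above.
    t = np.asarray(t_ms, dtype=float)
    v = np.asarray(v_m_s, dtype=float)
    g = t * v ** 2
    norm = t0_ms * np.interp(t0_ms, t, v) ** 2
    if norm > 1e-12:
        g = np.where(t <= t0_ms, 1.0, g / norm)
    return g
```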
DC Removal
This module removes the DC component of each trace, if there is any. When you access this module, the
following window appears:
Parameters
Start Time and End Time define the time range where the DC level will be evaluated.
Save DC check box, when checked, allows saving the DC value evaluated for each trace into a trace
header field selected in the drop-down list to the right of the check box.
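The operation itself is simple; a sketch of one plausible implementation (function name hypothetical):

```python
import numpy as np

def remove_dc(trace, dt_ms, start_ms, end_ms):
    # Estimate the DC level as the mean amplitude inside the
    # Start/End time gate and subtract it from the whole trace.
    # The estimated value is also returned (cf. the Save DC option).
    i0 = int(start_ms / dt_ms)
    i1 = int(end_ms / dt_ms) + 1
    dc = float(np.mean(trace[i0:i1]))
    return trace - dc, dc
```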
Bandpass Filtering
This module applies frequency filtering to every input trace. The filtering algorithm operates in the
frequency domain and is implemented by multiplying the Fourier transform of the trace by the digital filter
frequency response.
When this module is activated, a window containing two fields will appear: one for filter type selection and
one for filter parameter selection. The set of filter parameters depends on the type of filter selected.
Parameters
The Field for filter selection contains:
• Simple bandpass filter – a simple trapezoidal bandpass filter. Here, specify four frequency
values in Hz in the filter parameter selection field.
• Ormsby bandpass filter. Here, specify four frequency values in Hz in the filter parameter
selection field.
• Butterworth filter. Here, specify two frequency limit values in Hz and two slope
values in dB/oct.
• Notch filter – the notch filter is trapezoidal. Here, also specify four frequency values in Hz.
• Ricker wavelet filter – a bandpass filter represented as a Ricker wavelet. Here, specify the
frequency value in Hz corresponding to the main maximum in the spectrum.
Tapering – this parameter specifies the width of the tapering zones at the borders of the indicated time window, in
which the amplitudes are tapered linearly (or according to a Gaussian law, in case the Gauss Taper
is selected) from 0 to 100%. Tapering is defined as a percentage of the time window width. Tapering makes
it possible to avoid edge effects at the borders of the windows.
To improve computation speed, the parallelization option can be turned on by specifying the Number of
Threads – the number of CPU cores to be used in computations. Each computer setup and module
requires preliminary tests on a small data volume to define the number of CPUs that achieves the best
computation speed. One can start the tests with the number of physical cores minus 1.
Choose one type of filter. In the filter parameter selection field for the Simple bandpass filter or the
Ormsby bandpass filter, select a set of four frequencies in the corresponding parameter fields. These
frequencies consecutively define the 0% and 100% points of signal passing on the low-frequency
side and the 100% and 0% points of signal passing on the high-frequency side (expressed in
Hz). The filter slopes are formed in the frequency domain by a linear weight function (for the Simple
bandpass filter) or by a Hanning function (cosine weight function) for the Ormsby bandpass filter.
Example:
If you set such parameters, the result is a bandpass filter with a pass band from 20 to 50
Hz and a low-frequency slope 30 Hz wide.
If you set such parameters, the result is a bandpass filter with a pass band from 20 to 50
Hz and with 10 dB/oct slope steepness for lower frequencies and 40 dB/oct slope steepness for upper
frequencies.
For the notch filter, the frequencies consecutively define the 100% and 0% points of signal passing on the
low-frequency side and the 0% and 100% points of signal passing on the high-frequency side.
Example:
If you set such parameters, the result is a notch filter with a suppression band from 45
to 55 Hz and a 5 Hz wide slope on both the lower and upper frequency sides.
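A sketch of the frequency-domain filtering described above, for the Simple bandpass case with a trapezoidal response (an illustration only; assumes 0 < f1 < f2 < f3 < f4 < Nyquist):

```python
import numpy as np

def simple_bandpass(trace, dt_s, f1, f2, f3, f4):
    # Multiply the trace spectrum by a trapezoid that ramps 0 -> 1 over
    # f1..f2 and 1 -> 0 over f3..f4 (linear slopes, as for the Simple
    # bandpass filter); an Ormsby filter would use cosine-shaped slopes.
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt_s)
    response = np.interp(freqs, [0.0, f1, f2, f3, f4, freqs[-1]],
                         [0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
    return np.fft.irfft(np.fft.rfft(trace) * response, n)

# e.g. pass band 20-50 Hz with 10 Hz wide slopes, 1 ms sampling:
# filtered = simple_bandpass(trace, 0.001, 10, 20, 50, 60)
```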
Butterworth Filtering
This module applies Butterworth frequency filtering to each input trace. The filtering algorithm
operates in the frequency domain and is implemented by multiplying a Fourier transform of the trace
by the digital filter frequency response.
Theory
The Butterworth filter is designed in such a way as to make its amplitude frequency response as smooth
as possible for frequencies within the pass band, while having it drop almost to zero for frequencies within
the stop band range.
The amplitude frequency response G(w) of an n-th order Butterworth low-pass filter can be expressed by
the following formula:

G²(w) = 1 / (1 + (w/wc)^(2n))

where
• n – filter order (for a first-order filter, the response rolls off at −6 dB per octave, for a second-
order filter, the response decreases at −12 dB per octave, a third-order at −18 dB, and so on.)
• wc – cut-off frequency (the frequency at which the amplitude response equals −3 dB)
The amplitude-frequency response of an nth order Butterworth high-pass filter is calculated using the
same formula, but with “w” replaced by “1/w”.
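A sketch of the combined amplitude response following the formula above (a slope of 6·n dB/oct corresponds to a filter order n; the function and parameter names are illustrative):

```python
import numpy as np

def butterworth_response(freqs, low_cut=None, low_n=2, high_cut=None, high_n=4):
    # High-cut (low-pass) branch: |G(w)| = 1 / sqrt(1 + (w/wc)^(2n));
    # low-cut (high-pass) branch: the same with w replaced by 1/w.
    f = np.asarray(freqs, dtype=float)
    g = np.ones_like(f)
    if high_cut:
        g *= 1.0 / np.sqrt(1.0 + (f / high_cut) ** (2 * high_n))
    if low_cut:
        g *= 1.0 / np.sqrt(1.0 + (low_cut / np.maximum(f, 1e-12)) ** (2 * low_n))
    return g
```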
Module parameters
When the module is activated, a dialog box with filter parameters will appear:
The dialog box allows specifying the filter operator type, two frequency limit values in Hz, and two filter
slope ratio values in dB/oct.
Low-cut filter:
• Low-cut frequency – the -3dB cutoff frequency;
• Low-cut slope in dB per octave;
High-cut filter:
• High-cut frequency – the -3dB cutoff frequency;
• High-cut slope in dB per octave.
Multi-threading computations – to improve computation speed, the parallelization option can be turned on
by specifying the Number of Threads – the number of CPU cores to be used in computations. Each
computer setup and module requires preliminary tests on a small data volume to define the number of CPUs
that achieves the best computation speed. One can start the tests with the number of physical cores minus 1.
Resample
This module allows for changing of sample rate of the data. It allows both increasing and decreasing of
the sample rate. New sample rate can be a broken number. Usually the data are resampled to rarer
sampling interval in order to increase processing speed and reduce space on disk required for data
storing.
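As an illustration, resampling to a fractional new interval can be done with a rational-ratio polyphase resampler, e.g. with SciPy (a sketch, not the module's actual algorithm):

```python
from fractions import Fraction
from scipy.signal import resample_poly

def resample_trace(trace, old_dt_ms, new_dt_ms):
    # Approximate the rate ratio by a rational number and resample;
    # resample_poly applies an anti-alias filter when decimating.
    ratio = Fraction(old_dt_ms / new_dt_ms).limit_denominator(1000)
    return resample_poly(trace, up=ratio.numerator, down=ratio.denominator)
```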
Parameters
In the New sample rate field, specify the new sampling interval value expressed in ms.
Number of threads – the number of parallel processing threads (the number of logical processor cores
that are available for the operation of the module).
Hilbert Transforms
This module is used to recalculate seismic traces into traces of reflection strength, instantaneous phase
or instantaneous frequency.
For every trace of the dataset passing through the flow, an analytic function (complex trace) is
created, where the real part of the signal is the trace itself and the imaginary part is its Hilbert transform.
The modulus of this function is usually called instantaneous amplitude or "reflection strength". Reflection
strength analysis often allows more accurate tracing of amplitude variations along reflecting boundaries
and over the whole section. Since the phase of the function is independent of its modulus, instantaneous
phase analysis can be used for tracing boundaries and revealing faults and irregularities. Instantaneous
frequency is calculated as the phase derivative and can be used for studying the absorbing and/or
scattering properties of the section, since absorption and scattering result in frequency-dependent elastic
wave attenuation.
Parameters
Select one of available types of transform:
• Hilbert Transform
• Reflection Strength
• Instantaneous Phase
• Instantaneous Frequency
Use median filter ___ samples – when Instantaneous Frequency is selected, you can activate this
option to apply a median filter (with an operator of the specified number of samples) in order to
smooth the result.
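A compact sketch of these transforms via the analytic signal, using SciPy (the module's internal implementation may differ):

```python
import numpy as np
from scipy.signal import hilbert, medfilt

def instantaneous_attributes(trace, dt_s, median_samples=0):
    analytic = hilbert(trace)                  # trace + i * Hilbert transform
    strength = np.abs(analytic)                # reflection strength
    phase = np.unwrap(np.angle(analytic))      # instantaneous phase (rad)
    freq = np.gradient(phase) / (2 * np.pi * dt_s)   # phase derivative -> Hz
    if median_samples > 1:                     # optional smoothing of frequency
        freq = medfilt(freq, kernel_size=median_samples | 1)  # odd kernel
    return strength, phase, freq
```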
Stockwell Transform (S-transform)
The module converts a seismic trace into an ensemble in which each trace is an S-transform of the
original trace at the corresponding frequency. The first trace in the ensemble corresponds to zero
frequency, the last one corresponds to the Nyquist frequency. The frequency step is calculated by the
formula df = fN / (N/2), where fN is the Nyquist frequency and N is the number of samples. More information about
the Stockwell transform can be found at the end of the module description.
Parameters
Frequency header – specify the header containing the frequency value to which the trace corresponds
in the output ensemble.
Frequency index header – specify the header containing the number of the trace within the output
ensemble. The value will vary in the range [1, 𝑁/2], where N is the number of samples in the traces of
the original dataset.
Theory
Frequency-time transforms are widely used to analyze seismic signals because they make it possible to
account for signal changes over time. There are various methods for analyzing the frequency-time
behaviour of a signal: the Gabor transform, the short-window Fourier transform, the continuous
wavelet transform, and the bilinear frequency-time transform.
The Stockwell transform (S-transform) is the time-frequency transform that has some advantages over
the Gabor and Wigner transforms.
The S-transform can be obtained as a phase correction of the continuous wavelet transform (R.G. Stockwell, 1996).
The continuous wavelet transform of the function h(t) is given by the formula:

W(τ, d) = ∫_{−∞}^{+∞} h(t) w(t − τ, d) dt
The S-transform of the function h(t) is defined as a continuous wavelet transform with a specific mother
wavelet multiplied by a phase factor:

S(τ, f) = e^(i2πfτ) · W(τ, d), where the dilation d is the inverse of the frequency, d = 1/f.
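For reference, a sketch of the discrete S-transform computed by the usual frequency-domain recipe (shift the spectrum, apply a Gaussian window, inverse-transform); this is an illustration, not the module's code:

```python
import numpy as np

def stockwell_transform(h):
    # Returns rows for k = 0 .. N/2 (zero to Nyquist frequency); row k is
    # the S-transform of h at frequency index k.
    h = np.asarray(h, dtype=float)
    n = len(h)
    H = np.fft.fft(h)
    m = np.fft.fftfreq(n) * n                       # integer frequency indices
    rows = [np.full(n, h.mean(), dtype=complex)]    # k = 0: mean of the trace
    for k in range(1, n // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / k ** 2)  # Gaussian window
        rows.append(np.fft.ifft(np.roll(H, -k) * gauss))     # H[m + k] * G
    return np.array(rows)
```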
This module allows some integral transformations of every trace of the dataset passing through the
flow. When this module is activated, the following window appears:
Parameters
Here, select the type of transform:
Amplitude spectrum
• Unwrap phase spectrum – a value of 360°·n is added to the phase spectrum values when crossing
each π/2.
• Shift zero to center of a trace
Hilbert transform
Phase rotation (degrees). When the option is chosen specify the phase rotation in degrees in the
edit field to the right.
Antiderivative - integral of the trace, the value of the integration constant is zero. You can use this
option to convert data acquired with an accelerometer to a form of standard geophone data.
Trace Derivative
The module is designed to calculate the derivative of a trace.
Parameters
In the module parameters, you must specify one of the three numerical methods that will be used when
calculating the derivative trace:
Parameters:
• Power exponent – the power exponent.
▪ Constant – given as a real number
▪ Variable – header must be selected in the dropdown list
• Keep sign – if this flag is on, the amplitude of the sample will be assigned the sign of the
initial amplitude after being raised to the power. Otherwise, after being raised to the power, all
the amplitudes will be positive.
• Positive only – only positive amplitudes will be raised to the power. The samples with
initially negative amplitudes are zeroed.
• Negative only – only negative amplitudes will be raised to the power. The samples with
initially positive amplitudes are zeroed. (The sign of the result will depend on the value of the
Keep sign flag.)
Kolmogorov Spectral Factorisation
This module implements one of the standard methods for signal conversion to a minimum-phase
signal. The module is useful in the following situations:
1. If you need to obtain the minimum-phase signal, start the module with zero parameters.
2. You can create a bubble pulsation suppression filter using the Zero lags option to zero the
delays in the lag-log domain.
3. You can also obtain the zero-phase signal by applying factorization to the output minimum-
phase signal. The pulse phase spectrum is adjusted by changing the delays in the lag-log
domain.
One of the most commonly used marine seismic sources is an air gun. Air supplied under high pressure
creates a gas bubble in the water which expands due to the difference between the hydrostatic pressure
and the pressure inside the bubble. Because of inertia, the bubble passes the state of equilibrium when
pressures are equalized, and begins to collapse. The bubble expansion and collapse process is repeated
several times while the bubble rises to the surface. This creates a bubble pulsation phenomenon.
Pulsations of the air bubble result in additional energy spikes that act as undesirable sources,
significantly complicating the wave pattern.
The method used is a modified version of Kolmogorov’s spectral factorization method which is one of
the standard techniques for signal conversion to a minimum-phase signal.
The Wavelet extraction module is used to build an effective filter for suppression of bubble
pulsations.
A characteristic feature of the filter is that it has no effect on the first break waveforms of the original
seismic pulse as it affects only the longer time delays corresponding to air bubble oscillations.
Parameters
• Zero lags (ms) – how much should be zeroed in the lag-log domain to ensure that everything
except the bubble is zeroed.
• Anticausality (ms) – delay affecting the signal waveform. If this parameter is set to zero, the
signal phase remains unchanged. Modifying this parameter changes the phase spectrum of the
signal. The optimum value, with which the original pulse approaches the Ricker wavelet, is
determined experimentally.
• Anticausality taper (%) – width of the tapering window applied to reduce the signal to zero in
the lag-log domain.
• Shift zero to the center of the trace – shift the signal zero to the center of the trace.
Theory
Kolmogorov’s spectral factorization with the specified amplitude spectrum of the signal essentially
consists of finding the minimum-phase equivalent of the signal. A detailed description supported by
proof is provided by J. Claerbout (1992).
To formulate the problem in mathematical terms, let us use a Z-transform in the form Z = e^(iωΔt),
where Z is the unit delay operator. The Z-transform allows describing a time function as polynomial
coefficients at different powers of Z. Causal functions corresponding to minimum-phase signals
are defined by coefficients at positive powers of Z, while anti-causal functions are defined by
polynomials with negative powers.
With the specified energy spectrum of S(Z), the spectral factorization problem is solved by finding
minimum-phase function B(Z):
B*(1/Z)·B(Z) = S(Z)
Since S(Z) is an energy spectrum and, therefore, is positive for all ω, let us take its
logarithm:
U(ω) = ln[S(ω)]
Because U(ω) is a real and even function, its representation in the time domain is also real and even.
Therefore, we can separate the causal part C(Z) and the anti-causal part C(1/Z):
U(Z) = C*(1/Z) + C(Z)
With a known C(Z), the target function can easily be obtained by expressing C(ω) through an
exponential function: B(ω) = e^(C(ω))
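This recipe (log spectrum → cepstrum → keep the causal part → exponentiate) translates almost directly into code; a sketch assuming an even-length, real, positive amplitude spectrum sampled on the full FFT grid (an illustration, not the module's implementation):

```python
import numpy as np

def kolmogorov_minimum_phase(amp_spectrum):
    # Build the minimum-phase wavelet whose amplitude spectrum equals
    # amp_spectrum: take the log spectrum, go to the cepstral (lag-log)
    # domain, keep the causal part, exponentiate, and return to time.
    a = np.maximum(np.asarray(amp_spectrum, dtype=float), 1e-12)
    n = len(a)                              # n assumed even
    u = np.fft.ifft(np.log(a)).real         # real, even cepstrum of ln|A|
    w = np.zeros(n)                         # causality weights: keep lag 0,
    w[0] = 1.0                              # double positive lags, zero
    w[1:n // 2] = 2.0                       # negative lags
    w[n // 2] = 1.0
    B = np.exp(np.fft.fft(w * u))           # B(w) = exp(C(w)), |B| = amp
    return np.fft.ifft(B).real              # minimum-phase wavelet
```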
Based on Kolmogorov’s filter described above, we can now build a bubble pulsation suppression filter.
Let us assume that F(ω) is the target filter in the frequency domain. In the above notation, the filter
takes the form F(ω) = e^(U(ω)). Using a Z-transform, we can reformulate U(ω) as:
U(Z) = Σ_τ U_τ · Z^τ
Parameter τ has a time dimension and represents the delay in the cepstral domain. The U_τ values are
the coefficients of the target filter.
Let us take advantage of the exponential function property e^(A+B+C) = e^A·e^B·e^C and split the sum under
the exponent into short, medium and long delays:

e^(Σ_{τ=1}^{τmax} u_τ Z^τ) = e^(Σ_{τ=1}^{c} u_τ Z^τ) · e^(Σ_{τ=c+1}^{r} u_τ Z^τ) · e^(Σ_{τ=r+1}^{τmax} u_τ Z^τ)

(wavelet) = (continuity)(Ricker)(bubble)
This expansion into three delay ranges in the cepstral domain allows roughly splitting the original
signal into the pulse waveform (A and B) and bubble pulsations (C). By zeroing the coefficients for short
and medium delays and preserving the coefficients for long delays, we obtain the required filter
for application of corrections for bubble pulsations.
Coefficients for eB in the time domain can be calculated using a Fourier transform. To do this, let us
estimate B(Z) for multiple real ω and perform an inverse Fourier transform to obtain the time domain
coefficients.
Let us assume that r = r(ω), ϕ = ϕ(ω), Z^τ = e^(iωτ), and express the causal filter through an exponential
function:

|r|·e^(iϕ) = e^(ln|r| + iϕ) = e^(Σ_τ b_τ Z^τ)

The resulting filter pair has the properties of causality and reciprocality:

|r|·e^(iϕ) = e^(+Σ_τ b_τ Z^τ)
|r|⁻¹·e^(−iϕ) = e^(−Σ_τ b_τ Z^τ)
With the known spectrum r(ω), we can build a minimum-phase filter. Since r(ω) is a real and even
function of ω, the same applies to its logarithm. Let us denote the result of the inverse Fourier transform
of ln|r(ω)| as e_τ, an even and real function of time, and denote the odd real function as o_τ:

|r|·e^(iϕ) = e^(ln|r| + iϕ) = e^(Σ_τ (e_τ + o_τ) Z^τ)
The phase φ(ω) has been transformed into o_τ. We can ensure causality by selecting o_τ such that
e_τ + o_τ = 0 for all negative τ. This determines the values of o_τ in the negative lag domain;
since o_τ is an odd function, we also know its values for τ > 0. The result is a causal exponential
function that acts as a causal minimum-phase filter for the specified spectrum. Therefore, the causal
minimum-phase filter can be obtained by multiplying e_τ by a step function of height 2 to
preserve the real part. This transform is called Kolmogorov’s spectral factorization.
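For orientation, here is a minimal NumPy sketch of this factorization recipe (our illustration, not the module’s internal code): it takes a desired amplitude spectrum sampled on the full FFT grid, causalizes the log-spectrum in the lag domain with the step of height 2, and returns the causal minimum-phase wavelet.

import numpy as np

def kolmogorov_min_phase(amp):
    # amp: desired amplitude spectrum on the full FFT grid (length n, n even)
    n = len(amp)
    u = np.log(np.maximum(amp, 1e-10))   # U(w) = ln S(w), real and even
    c = np.fft.ifft(u).real              # real, even function in the lag domain
    c[1:n // 2] *= 2.0                   # double the positive lags (step of height 2) ...
    c[n // 2 + 1:] = 0.0                 # ... and zero the negative lags
    B = np.exp(np.fft.fft(c))            # B(w) = exp(C(w)), minimum phase
    return np.fft.ifft(B).real           # causal minimum-phase wavelet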
Let us start with the previously obtained b_τ and split the function into its even and odd parts:
u_τ^odd = (b_τ − b_(−τ))/2 and u_τ^even = (b_τ + b_(−τ))/2
Now let us transform the even part into the logarithm of the amplitude spectrum; the odd part will be
transformed into the phase spectrum. Let us modify the phases while leaving the amplitude spectrum
unchanged, multiplying u_τ^odd by a weight function that shrinks to zero in the domain of small
values of τ. Figure (a) shows the odd function u_τ^odd, and figure (b) shows the weight function
shrinking to zero. Parameter τ_a controls the anti-causality of the wavelet.
In practice the signal is extracted in one of two ways:
1. A direct Fourier transform is applied to the traces, the mean amplitude spectrum is calculated, and
an inverse Fourier transform is then applied to that spectrum.
2. The spectrum of the signal ACF is found for each trace, the mean value is calculated, and an
inverse Fourier transform is then applied.
Parameters
Extraction Method:
• Amplitude spectrum averaging – the signal is obtained as follows: amplitude spectra are
calculated for each trace and their mean is determined. After that, an inverse Fourier
transform is applied to the averaged amplitude spectrum.
• ACF spectrum averaging – the zero-phase equivalent of the signal is obtained through
averaging of the autocorrelation function (ACF) spectra.
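The first method can be sketched in a few lines of NumPy (an illustration under our naming, not the module’s code):

import numpy as np

def extract_wavelet(traces):
    # Amplitude spectrum averaging: mean |FFT| over all traces, inverse FFT back
    mean_amp = np.abs(np.fft.rfft(traces, axis=1)).mean(axis=0)
    wavelet = np.fft.irfft(mean_amp, n=traces.shape[1])  # zero-phase equivalent
    return np.fft.fftshift(wavelet)  # optionally shift zero to the trace center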
This module derives a matching filter between two datasets based on the Wiener filter algorithm. The filter
makes the phase and amplitude characteristics of one dataset similar to those of the other.
Two datasets are input into the module. The datasets are sorted by the ID Header specified in the
module parameters, so that the traces from the first and second dataset alternate:
IMPORTANT! The input datasets must have the same size for all filter modes except for Sum traces.
The module outputs traces with the applied filter coefficients. The number of output traces depends on
the filter mode and the number of input traces.
Before creating the filter, the time shift to be applied to the first dataset is determined. It
is calculated from the cross-correlation maximum. The maximum shift value is defined in the filter
parameters and cannot exceed 10% of the filter length. The resulting time shift value is written to the
specified header. This improves the accuracy of the Levinson algorithm used to calculate the
filter coefficients.
The filter is calculated in the specified trace time window. The filter length cannot exceed the window
size. The zero time of the filter (taking the calculated time shift into account) is written to the header
specified by the Zero time header parameter.
The filter can be calculated in one of 4 modes that are covered in greater detail in the filter parameter
description.
• Time shift header – header where the resulting shift values will be saved.
Filter parameters
• Zero time header – header containing the filter zero time (taking into account the time shift).
• Sum traces – traces from the first and second dataset are summed, correlation functions are
calculated for the resulting two mean traces, a system of linear equations is created, and finally
the filter coefficients are determined from those equations. The filter outputs a single trace.
• Sum correlation functions – the trace window is centered relative to the current trace pair.
Correlation functions are calculated inside each window for each pair of traces from the two
datasets. After that, the correlation functions are summed. A system of linear equations is
created for the mean correlation functions, and the filter coefficients are determined from those
equations. The output consists of a set of filters for each trace pair.
• Sum filters – filter coefficients are calculated inside each window for each pair of traces from
the two datasets. After that, the resulting filters are averaged within the window. The output
consists of a set of filters for each trace pair.
• No sum – correlation functions are calculated for each pair of traces from the two datasets, a
system of linear equations is created, and finally the filter coefficients are determined from
those equations. The output consists of a set of filters for each trace pair.
• Trace window – trace window for the Sum correlation functions and Sum filters modes.
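For a single trace pair, the core computation reduces to solving Toeplitz normal equations by the Levinson recursion; a compact sketch (assuming SciPy is available; all names are illustrative):

import numpy as np
from scipy.linalg import solve_toeplitz

def match_filter(a, b, flt_len, white_noise=0.01):
    # Wiener filter f such that f * a (convolution) approximates b in least squares
    mid = len(a) - 1
    acf = np.correlate(a, a, mode='full')[mid:mid + flt_len]  # autocorrelation of a
    acf[0] *= 1.0 + white_noise                               # regularization
    ccf = np.correlate(b, a, mode='full')[mid:mid + flt_len]  # cross-correlation (b, a)
    return solve_toeplitz((acf, acf), ccf)                    # Levinson solve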
Remove AGC
This module is used to restore the true amplitudes after application of AGC.
Parameters
• AGC coefficients dataset – dataset containing the AGC coefficients.
• Matching fields – fields by which the traces will be selected (i.e. the module will search for
traces in the selected dataset based on the value of this field for each trace).
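Conceptually the operation is a per-sample division by the stored gains; a minimal sketch (assuming the coefficients dataset stores the gains that were applied by AGC):

import numpy as np

def remove_agc(traces_with_agc, agc_gains):
    # undo the per-sample gains; guard against zeros from muted samples
    safe = np.where(agc_gains != 0.0, agc_gains, 1.0)
    return traces_with_agc / safe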
Time Variant bandpass filtering
This module performs window-based bandpass filtering. The trace is divided into time intervals, and a
bandpass is specified for each interval. The filter type is set once for all windows.
Parameters
Use Windows – this box is checked by default. If you uncheck it, the module will function identically
to the Bandpass filtering module:
• From pick/Manual time borders (ms)/From header - determines how the boundaries will be
set: from picks, manually, or from headers. The boundaries between the time intervals to which
filtering will be applied are specified in the left pane, separated by semicolons. These
boundaries can also be set using picks from the database corresponding to the data being
processed (From pick -> Add).
• Tapering length – length of the tapering window, ms.
Filter type:
• Simple Bandpass filter – applies a simple trapezoidal bandpass filter. You need to specify four
frequency values in Hz in the filter parameter selection field.
• Ormsby Bandpass filter – applies an Ormsby bandpass filter. You need to specify four
frequency values in Hz in the filter parameter selection field.
• Butterworth filter – applies a Butterworth filter. You need to specify two frequency limits in
Hz and two slope values in dB/oct.
• Notch filter – applies a trapezoidal rejection filter. For this filter, you also need to specify four
frequency values in Hz.
A set of frequencies defining the bandpass is specified for each window; parameters for adjacent
windows are separated by semicolons. In the example below, three time windows are used
(0–300 ms, 300–600 ms and 600–1000 ms), each with its own frequency set.
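Conceptually, the module blends band-filtered copies of the trace with tapered window weights; a sketch of the idea (assuming SciPy; a Butterworth filter stands in for whichever filter type is selected):

import numpy as np
from scipy.signal import butter, sosfiltfilt

def time_variant_bandpass(trace, fs, borders, bands, taper_len):
    # borders: sample indices between windows; bands: one (low, high) Hz pair per window
    filtered = [sosfiltfilt(butter(4, b, btype='band', fs=fs, output='sos'), trace)
                for b in bands]
    edges = [0] + list(borders) + [len(trace)]
    out = np.zeros_like(trace)
    for i, f in enumerate(filtered):
        w = np.zeros(len(trace))
        w[edges[i]:edges[i + 1]] = 1.0
        if taper_len:  # smooth the window edges so adjacent weights crossfade
            w = np.convolve(w, np.ones(taper_len) / taper_len, mode='same')
        out += f * w
    return out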
Time-Frequency Representation
The module is designed to convert a trace to the time-frequency domain.
Output data: a set of trace ensembles representing the spectrum change with time for each trace.
Parameters:
• Window length — time window length in milliseconds within which the amplitude spectrum is
calculated.
• Time step — the step between two consecutive windows. Note that the step can be either
smaller or larger than the window length.
• Tapering — number of samples smoothed at each edge prior to calculation of the amplitude
spectrum inside the window.
• dB scale — if the parameter is active (Yes), the traces will be output in the normalized (dB) scale.
Result of Time-Frequency Representation module in Screen Display module window. Frequency is measured
on the vertical axis, time window position is measured on the horizontal axis.
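The computation amounts to a sliding-window amplitude spectrum; a minimal sketch (our naming):

import numpy as np

def time_frequency(trace, win_len, step, taper):
    # amplitude spectrum in sliding windows; one output row per window position
    w = np.ones(win_len)
    if taper:  # linear smoothing of the window edges
        ramp = np.linspace(0.0, 1.0, taper)
        w[:taper], w[-taper:] = ramp, ramp[::-1]
    starts = range(0, len(trace) - win_len + 1, step)
    return np.array([np.abs(np.fft.rfft(trace[s:s + win_len] * w)) for s in starts])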
Time-Variant Amplitude Gain module
The module allows normalizing the amplitudes in windows indicated by the user. This makes it
possible to bring different parts of the seismic section to one amplitude level. The normalizing
coefficients are calculated for each trace separately.
Parameters
Horizons (ms) – the list of horizons dividing the windows for normalization.
The possible ways to enter the horizons for the normalization window
An example of window selection is shown in the figure below. In this case, the two picks
specified in the Horizons field divide the section into 3 parts, in which the normalizing
coefficients will be calculated:
1) above the red pick
2) between the picks
3) below the yellow pick
Example of window selection on the seismic section
ATTENTION! The module sorts the horizons by time for each trace, so overlapping horizons can
be specified in the parameters. The order in which the user coefficients are applied, therefore, does
NOT depend on the order in which the horizons are set.
Normalize using – select the way of normalizing. The following options are available:
• RMS
When this method is selected, all the amplitudes in the window will be divided by the root-
mean-square amplitude of the window;
RMS = √( (Σ_i a_i²) / n )
• Maximum value
In this case, all the amplitudes inside the window will be divided by the maximum window
amplitude.
• Custom coefficient
Allows you to specify your own coefficients to change the amplitudes. In this case, the amplitudes
will be multiplied by the coefficient defined by the user. The coefficients are entered in
the Coefficients window, which is only activated when this normalization method is
selected. The coefficients are entered after a colon.
Tapering length (ms) - the length of the interval for smoothing the normalizing coefficients between
the windows, in ms.
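Per-trace RMS normalization within windows can be sketched as follows (our illustration; horizons are given here as sample indices for one trace):

import numpy as np

def normalize_windows(trace, borders):
    # borders: horizon positions (samples) for this trace; sorted per trace,
    # as the module does, so overlapping horizons are handled
    edges = [0] + sorted(borders) + [len(trace)]
    out = trace.copy()
    for s0, s1 in zip(edges[:-1], edges[1:]):
        rms = np.sqrt(np.mean(trace[s0:s1] ** 2)) if s1 > s0 else 0.0
        if rms > 0:
            out[s0:s1] = trace[s0:s1] / rms  # divide the window by its RMS amplitude
    return out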
Python Proxy
Python Proxy is a flow module that allows you to work with traces and their headers using the Python
programming language. Three objects are passed from the flow to the input of the module: traces,
headers and header_dictionary (described below).
These objects allow you to retrieve amplitude and header values for any trace, perform the necessary
operations on them using Python, and output the result to the flow. The Python Proxy module
imposes almost no restrictions on the complexity of the applied procedures, and you can import any
Python library into the script to solve the task you are working on.
Preparatory steps
NOTE: we strongly recommend that you first install the latest version of Python on your computer (see the step-
by-step instructions below) and only then install a new version of RadExPro. In this case RadExPro will
automatically integrate the Python environment into the Python Proxy module during installation.
Before working with the Python Proxy module, you need to have Python version 3.* installed (earlier
versions are not supported) as well as the NumPy library.
Installation window 2:
Installation window 3:
4) Although the latest version of the Python interpreter is installed from the website, you will
need to manually update the pip package manager. This can be done as follows:
1) Run the command line as administrator.
2) Navigate to the Python installation folder using cd:
cd C:\Program Files\Python310 (an example; use the path where you installed Python)
3) Install and upgrade the pip package manager using the following command:
python.exe -m pip install --upgrade pip
4) After installing the package manager, navigate to the Scripts folder using the command:
cd C:\Program Files\Python310\Scripts
5) Install the NumPy library using the following command:
pip install numpy
6) Reboot your computer.
7) Install RadExPro.
Module parameters
The module window is shown in the figure below:
In the window you can activate one of two modes:
1) Python script text - use Python program code written in the module text box.
NOTE: you can use replica variables within the text box.
2) Python script file - use Python program code from the specified .py file.
NOTE: you can use replica variables in the filename. Using replica variables inside a script
stored in the file is not supported.
Process all headers - if the option is not activated, only those headers whose names are used in the
script will be passed to the module. If the option is activated, then all headers from the flow, not only
those represented explicitly in the script, will be passed in the headers argument (see Input arguments
for exec function).
Modify headers only - if this option is enabled, the module will load only headers from the flow,
which will significantly increase the speed of execution. It is not possible to work with traces in this
mode.
• traces - input two-dimensional array of traces in NumPy.array format of float32 type. The
first index of the array is the trace number, the second index is the sample number. To
obtain the number of traces and the number of samples in them, you can use the following
constructions:
n_traces = traces.shape[0] - number of input traces.
n_samples = traces.shape[1] - number of samples in the input traces.
• headers - input two-dimensional array of headers in NumPy.array format of float64 type. The
first index of the array is the number of the trace to which the header refers, the second index is
the sequential number of the header. To understand which project header corresponds to which
index, the header dictionary is used (see the next paragraph). To get the number of
headers used in the script you can use the following construction:
n_headers = headers.shape[1] - number of headers passed to the script.
• header_dictionary is a dictionary matching header names to their indices inside the headers array.
For example, {"AAXFILT": 1, "AAX": 2 .... "PICK1": 21, ....}. Note that header names are
case-sensitive, so you need to use them strictly in the case in which they are specified in the
project database. To get the index of the required header within exec, use the get()
method:
pick_index = header_dictionary.get("PICK1")
NOTE: the traces, headers and header_dictionary objects are generated automatically by RadExPro when
data is received by the Python Proxy module. The user only needs to define the exec function with
these three arguments: exec(traces, headers, header_dictionary).
NOTE: For the module to work correctly, all child processes that were running while executing code
inside Python Proxy must be stopped by the time the module's results are unloaded to the flow.
Outputting Python Proxy results to the flow
Two options are possible when outputting data:
1) A tuple of the input arrays traces and headers is returned to the flow with the internal values
changed but the arrays' dimensions unchanged.
2) The flow receives a tuple of traces and headers arrays which differ in size from the input
arrays. In this case, the module will unload data into the flow only if the following conditions
are met:
output_traces = numpy.zeros((new_number_of_traces, new_number_of_samples), dtype=numpy.float32)
output_headers = numpy.zeros((new_number_of_traces, n_headers), dtype=numpy.float64)
In other words, the number of traces should be equal to the number of header sets and the type
of values inside each two-dimensional array should not differ from the accepted float32 for
traces and float64 for headers.
If the conditions are not met, execution will end with an error in the program log.
Examples of implemented scripts
Example 1. A script demonstrating the ability to edit amplitudes:
import numpy as np
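# The rest of the script is sketched below for illustration (the doubling
# operation is an assumed example, not the original script body):
def exec(traces, headers, header_dictionary):
    traces[:, :] = traces * 2.0  # edit amplitudes in place
    return traces, headers       # dimensions unchanged (output option 1)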
Example 2. A script to demonstrate the header and data visualization capability of Python
Proxy (the middle of the script is reconstructed here for completeness; the header names and the
rotation angle are illustrative).
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
def coord_rotation(x, y, angle=0.5):  # rotation angle in radians (illustrative)
    return x * np.cos(angle) - y * np.sin(angle), x * np.sin(angle) + y * np.cos(angle)
def exec(traces, headers, header_dictionary):
    sou_x = headers[:, header_dictionary.get('SOU_X')]  # header names assumed
    sou_y = headers[:, header_dictionary.get('SOU_Y')]
    sou_elev = headers[:, header_dictionary.get('SOU_ELEV')]
    sou_x_rotated, sou_y_rotated = coord_rotation(sou_x, sou_y)
    plt.figure('Source Map')
    plt.title('Source Map')
    plt.scatter(sou_x_rotated, sou_y_rotated, s=1, c=sou_elev)
    plt.scatter(sou_x, sou_y, s=1, c=sou_elev)
    plt.show()
    return traces, headers
The script uses the external library Matplotlib to visualize source positions. Also note that the logic for
calculating the coordinates after rotation is placed in a separate function coord_rotation(x, y) and is called
inside the exec function.
The result of the script looks as follows:
When the window is closed, the script execution moves on to the return statement, which returns the
unchanged traces and headers objects to the flow.
Q Filtering
This module conducts the modeling or compensation of attenuation-related effects on seismic data. It
can model or compensate for amplitude and phase effects separately or simultaneously. Q filtering is a
trace-by-trace operation. The module can be applied to post-stack data or pre-stack data. Depending on
the model of the subsurface, one can consider introducing different types of NMO correction before Q
compensation for pre-stack data (e.g., Xia, 2005). The module does not introduce any NMO
corrections internally.
Parameters
Q filtering direction – a switch which can be set to Inverse for Q compensation or to Forward for
modeling of Q effects.
Filter mode – a switch which selects whether the algorithm compensates/models Only Phase, Only
Amplitude, or both Phase and Amplitude.
Header #1/2 for Q model setup – two headers which define the spatial interpolation of the Q model.
These headers are used for the linear interpolation between the models set by the user in the third
column of the Q model table. Common options are CDP for 2D seismic and CDP_X/CDP_Y or
ILINE_NO/ XLINE_NO for 3D seismic, although other options are possible.
Q model – a table where the user can set multiple one-dimensional Q models. Interpolation between
these models is performed using the headers defined above. Note that the user needs to set local
(“interval”) Q values here.
Header for Q model zero time – an optional parameter which defines a header for setting the Q
model relative to a specific time. If this option is selected, then the Q model is set relative to the time
defined in this header. Area above this time is filled with a high value of Q corresponding to the
absence of attenuation. For example, if the header is set to WB_TIME with water bottom time equal
to 340 ms, and the model is 0-1000:100,1500:200, then in the actual Q model everything shallower
than 340 ms will be filled with high Q values, for the times from 340 to 1340 ms Q will be set to 100,
for the times from 1340 to 1840 ms Q will be interpolated linearly between 100 and 200, for larger
times Q will be extrapolated.
Reference frequency, [Hz] – sets the reference frequency for Q compensation or modeling. Q
compensation corrects for the dispersion caused by attenuation. Due to dispersion, different
frequencies travel with different phase velocities. The Q compensation removes the dispersion by
setting the phase velocity of each frequency equal to the phase velocity of Reference frequency, [Hz].
This parameter ultimately sets the amount of time shift of events on seismic gathers caused by Q
compensation. If the time shifts are undesirable, one can set this frequency to the approximate
dominant frequency of the seismic wavelet.
Time window size, [ms] – the size of the time window used for short-time Fourier transforms. It is
suggested to pick a time window which is larger than the seismic wavelet, but small enough to
accommodate the time-variant changes in the Fourier spectrum caused by attenuation. This is set by
trial and error in the interval 1-10 times the length of the seismic wavelet.
Time window step, [ms] – the step between the neighboring windows of the short-time Fourier
transform. This mainly influences the computational efficiency of the module. The most accurate
results are acquired with Time window step, [ms] equal to the sampling interval of the seismic data.
Increasing the time window step can significantly reduce the execution time with minor/unnoticeable
changes to the compensation/modeling accuracy. Time window step, [ms] needs to be smaller than
half of the Time window size, [ms].
Number of threads – sets the number of threads for parallelization. Setting the number of threads to 0
utilizes all available resources.
The module implements Q filtering in short-time Fourier transform domain similar to the algorithm
suggested by Wang (2006). The algorithm conducts a Fourier transform in a set of windows (with
window sizes set by the user), computes and applies a Q filtering operator for each window, and then
conducts the inverse short-time Fourier transform. One can run forward or inverse Q filtering for
phase, amplitude, or both.
The filter is regularized; the regularization only influences the amplitude spectrum of the operator, so
that the phase correction is always accurate. The regularization is implemented so that for high
frequencies the amplitude compensation operator tends to 1, preserving the energy present in the
original dataset and avoiding noise amplification (as explained by Wang (2006)).
Note that one can still set the regularization coefficient for the forward filtering, although the forward
operator is always stable even without it. For accurate forward filtering, the regularization coefficient
needs to be set to 0. The capability to set the regularization in the forward filtering was added so that
the user can undo the results of the previously applied inverse Q filter. Say, the user is working with
the dataset with the regularized inverse Q filter applied. If required, they can apply the filter with
exactly the same parameters in ‘forward’ mode to bring the attenuation effect back.
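The amplitude part of such an operator can be sketched as follows (a simplified illustration in the spirit of Wang (2006); the stabilization constant and names are ours, and the phase/dispersion part is omitted):

import numpy as np

def q_amplitude_operator(freqs, travel_time, Q, inverse, sigma2=1e-3):
    # forward: attenuation exp(-pi*f*t/Q) over travel time t for each frequency
    lam = np.exp(-np.pi * freqs * travel_time / Q)
    if not inverse:
        return lam
    # stabilized compensation: ~1/lam in the mid band, tends to 1 at high frequencies
    return (lam + sigma2) / (lam ** 2 + sigma2)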
Examples
The Figure below shows an example of the algorithm’s application to a synthetic dataset which
consists of 10 traces containing a set of Ricker wavelets at regular intervals. It can be observed that the
attenuation results in phase shifts and a decrease in amplitude at later times. The inverse filtering
reconstructs the original wavelet shape, compensating for both phase and amplitude.
Example application to a synthetic dataset
The following Figure shows an example of a field data application. The Q compensation operator
amplifies the seismic data at later times, making the amplitude distribution more even along the time
axis. The phase correction is also present, although hard to observe in the Figure.
References
Wang, Y. (2006). Inverse Q-filter for seismic resolution enhancement. Geophysics, 71(3), V51-V60.
Xia, G. (2005). Proper time-referencing for prestack Q estimation and compensation. In 67th EAGE
Conference & Exhibition. European Association of Geoscientists & Engineers.
Geophone -> DAS Conversion
This module conducts the conversion of particle velocity geophone data to strain rate averaged within
the gauge length, which is typically output by distributed acoustic sensing (DAS) systems. The method
applied here is outlined in the paper by Zulic et al. (2022). The same module can be used to convert
displacement seismograms to averaged strain. The module is a spatial filter which operates on
ensembles of traces, which, in most cases, need to be shot gathers.
NOTE: Depending on the polarity of the input data, the user may need to apply a polarity reversal to
the processing result to enable a comparison with the field DAS data. Polarity reversal can be
conducted with the Trace Editing or Trace Math modules.
Parameters
Distance between input traces, [m] – trace spacing in the input gather (the module can only handle
seismic gathers with constant trace spacing).
Gauge length, [m] – the desired value of modeled gauge length. Note that the desired gauge length
needs to be at least two times larger than the Distance between input traces.
The module conducts a conversion of geophone data to strain rate according to Zulic et al. (2022)
using the following expression:
ε̇_zz^DAS(z, t) = [ v_z(z + G/2, t) − v_z(z − G/2, t) ] / G
Here, z is the spatial coordinate, t is time, G is the gauge length, v_z is the input geophone data, and
ε̇_zz^DAS is the output strain rate averaged by DAS.
The algorithm assumes that the input data is sorted into shot gathers. Note that the module assumes
that the input geophone gather is equal to zero outside of the ensemble boundaries, so it generates edge
effects, which are particularly visible for large gauge lengths.
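In NumPy this central difference can be sketched as follows (our naming; the zero padding mirrors the ensemble-boundary assumption mentioned above):

import numpy as np

def geophone_to_das(gather, dx, gauge_length):
    # gather: (n_traces, n_samples) particle velocity; dx: trace spacing, m
    half = int(round(gauge_length / (2.0 * dx)))  # gauge must be >= 2*dx, so half >= 1
    padded = np.pad(gather, ((half, half), (0, 0)))  # zeros outside the ensemble
    return (padded[2 * half:] - padded[:-2 * half]) / gauge_length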
Example
The Figure below shows a comparison of a field geophone gather, a DAS dataset acquired in the same
spot, and the Geophone -> DAS conversion result.
The input dataset is a VSP survey acquired with both DAS and geophones which was published as an
open-source dataset at Research Data Australia by Zulic et al. (2022) (https://doi.org/10.25917/7h0e-
d392). This dataset was provided under a CC BY 4.0 license. More details can be found at
https://creativecommons.org/licenses/by/4.0/.
References
Zulic, S., Sidenko, E., Yurikov, A., Tertyshnikov, K., Bona, A., & Pevzner, R. (2022). Comparison of
amplitude measurements on borehole geophone and DAS data. Sensors, 22(23), 9510.
Data Enhancement
Theory
This module performs adaptive subtraction of one wave field from another. The main feature of the
algorithm is its capability to adjust for possible gradual amplitude distortion along the time
coordinate. A shaping filter is calculated using the least-squares method in such a way as to make the
filtered subtracted field as close to the source field as possible in terms of minimum quadratic deviation.
This filter is essentially a multi-channel filter. Additional channels for its calculation and subsequent
convolution are taken from the source trace by adding parametrized time non-stationarity to it.
Method theory
The adaptive wave field subtraction method consists in finding a continuous-time shaping Wiener filter
ĝ(t) among the set of Wiener filters g̃(t) that satisfies the following condition:
(1)
where p(a,b,t) is the source trace of the wave field being reduced, a and b define the spatial coordinates
of the trace, and l(a,b,t) is the trace of the wave field being subtracted.
The implemented algorithm allows making adjustments for possible discrepancies between the p(a,b,t)
and l(a,b,t) traces (including nonstationary discrepancies in t), but does not have enough degrees of
freedom to adjust for arbitrary differences.
A traditional method of accounting for nonstationarity in adaptation tasks is a “local stationary” (i.e.
stationary within the window) multi-window filtration that involves breaking the trace down into
intervals:
(2)
where i is the window number, with its upper and lower boundaries serving as limits of integration Ti.
Drawbacks of this method lie in the very idea of accounting for nonstationarity by breaking the
trace down into intervals within which such effects are considered insignificant. As a result, filters
generally have to be set up in small windows; this considerably degrades the statistical properties of
the estimates and can lead to a loss of usable reflection energy during subtraction. Besides, such
parametrization of nonstationarity leads to abrupt filter changes at window boundaries.
By introducing matrix designations for the filter set, expression (1) may be presented as follows:
(3)
where T is the trace length and w_i(t) is the step function. To eliminate the abrupt change of filters, the
number of the shaping filter’s degrees of freedom should be reduced while maintaining its
nonstationarity, i.e. the filter should be allowed to adjust for possible dynamic variations,
but not in an arbitrary manner. The use of smooth functions for weighting the subtracted trace is
suggested to eliminate the drawbacks inherent to the traditional approach.
In this case the step functions w_i(t) can be substituted with, for example, polynomials w_i(t) = t^i.
After that, the optimization task comes down to calculating a nonstationary, gradually varying shaping
filter for the entire trace. The procedure is technically implemented as follows. First, the subtracted trace
l(a,b,t) is represented as a weighted set of smooth functions w_i(t), resulting in a set of channels
l_i(a,b,t) = l(a,b,t)·w_i(t). Then adaptation is carried out by calculating a continuous-time shaping filter
for each channel l_i(a,b,t).
To increase the reliability of the filter calculation (especially for traces with a high amount of interference),
the calculations should also take adjacent traces into account, for example l(a,b + Δx,t),
l(a,b − Δx,t), etc., where Δx is the lateral increment. In this case the task of adapting the subtracted wave
field to the source data is reformulated as follows: it is also necessary to find the minimizing functionals
(4)
which leads to the Levinson algorithm for multi-channel filters. The block Toeplitz matrix obtained as a
result of the minimization can become ill-conditioned, which is characteristic of all optimum spatial
filtering tasks. To avoid this, a separate optimum multi-channel filter by variable i is found for each trace
in the base (b − MΔx, b + MΔx), obtaining the following as the result:
(5)
The procedure is repeated several times, with p(a,b,t) in the last expression substituted with the
subtraction result from the previous iteration, until the minimum of the functional J_a,b is reached.
Recalculation to a larger sampling interval (or, to be more precise, a “band transform”, or
resampling) presents another reserve to draw upon in order to improve the conditioning of the task. In this
case the lower limit frequency of the source trace operating band will correspond to the zero frequency
of the Fourier spectrum, while the upper limit frequency of the band will correspond to the Nyquist frequency.
Further improvement of algorithm stability in terms of increasing the regular noise suppression depth
while maintaining the signal dynamics can be achieved by using spatial smoothing
(7)
i.e. averaging of the functional over (2A+1) traces. In this case the optimum adaptive filter is determined
by minimizing the functional Y_a,b(c). If this modification of the procedure is implemented, the filter is
estimated for a group of traces, resulting in regularized solutions.
Parameters
As a rule, the raw field is considered to be the minuend field, while the field of multiples,
constructed in one way or another (e.g., modeled), is considered to be the subtrahend. The processing
mode expects pairs of traces at input: the first trace of each pair belongs to the raw (minuend) field,
the second to the subtrahend field. Thus, the traces of the subtrahend and minuend fields are fed to the
processing flow sorted by the main sort field (for example, DEPTH), and within this sorting they are
sorted by the field defining the affiliation to a field type (minuend or subtrahend, e.g., TRC_TYPE).
The procedure can be used in iterative mode, successively subtracting several fields (for
example, several modeled fields of multiples from different boundaries). The number of subtracted
fields is set by the Number of models parameter.
In such a case, all subtrahend fields are considered simultaneously during the least-squares calculation
of the shaping filter, which differs from subtracting each subtrahend field successively in separate runs.
Processing windows:
Considering the nature of the records, the processing sometimes has to be done in different
windows individually. The procedure supports division into windows, using picks/headers as the
boundaries of the processing windows.
• If there is no boundary in the list, we conclude that there is only one window from the start to
the end of the trace.
• If only one pick/header is assigned, the whole processing area is divided into two windows: 1)
from the start to the pick/header, 2) from the pick/header to the end of the trace.
• If you assign two picks/headers, we have three processing windows respectively, etc.
The boundaries of windows are added/removed using the Append item and Remove buttons; the
current set of boundaries is shown in the list. Reset allows you to return to the default settings. In the
window that appears, you can set a boundary using a header or a pick. The user should ensure that the
boundaries do not intersect, since the program behavior over intersecting boundaries is
unpredictable. You should also avoid windows that are too narrow, as the data in them will be
subtracted down to zero. However, sometimes it is useful to set a narrow window and mark it as
non-active (i.e., no subtraction will be performed in this window); in this way you can perform a
so-called muting of areas that are unsuitable for subtraction.
Multiplication parameters. This parameter set describes the characteristics of the basis functions
used to multiply the subtrahend field traces. The algorithm calculates a multichannel shaping filter;
the assignment of each multiplication function adds a channel formed by multiplying the raw trace by
the function t^(α·n), where:
• alpha is the so-called Exponent parameter, used for fine tuning; it is usually equal to 1.
• n is the so-called Number of basis function. If the Number of basis functions is 0, the shaping
filter is single-channel; if this parameter is equal to 1, a trace multiplied by time is added to the
shaping; if it is equal to 2, a trace with squared non-stationarity is added, etc.
The more multiplication functions we assign, the better one field is subtracted from the other; but care
must be taken, since we may then subtract too much from the raw field. A compromise should be
found, and the number of multiplication functions is selected individually, based on the processor’s
experience, in each case. Keep in mind that the extent of subtraction also depends on the other
parameters, particularly the filter length and the length of the working window: the longer the filter
and the shorter the window, the stronger the subtraction (and, possibly, the more we subtract in excess).
Subtraction parameters. This set of parameters characterizes the shaping filter:
• Window use – sets the sign of whether to perform subtraction in the given window or to keep
the raw field unchanged. (1 – subtraction in the window should be performed, 0 – should not be
performed).
• Filter length – sets the length of shaping filter, in samples.
• Filter zero position – sets the position of the zero sample of the filter, i.e., determines the ratio
between the anticipatory (predictive) and causal components of the filter.
• White noise level – regularization parameter: a fractional additive to the main diagonal of the
autocorrelation matrix used when solving the system (hence the common name “white noise level”).
• Window top tapering – the width of the smoothing interval between windows for which
subtraction operators are calculated. The number of samples from the top of the window.
• Window bottom tapering – the width of the smoothing interval between windows for which
subtraction operators are calculated. The number of samples from the bottom of the window.
• Shift model – set to Yes (1) if the subtracted model should be shifted to the maximum of the cross-
correlation between the current data trace and the current model trace. Such a shift can increase
the efficiency of subtraction when the coherence of reflections is broken, for
example, by sea waves.
• Max model shift up – the maximum possible model shift upwards relative to the initial
position, in samples.
• Max model shift down – the maximum possible model shift downwards relative to the initial
position, in samples.
Band transform
You can perform filtering within the full bandwidth as well as within a limited bandwidth; the
Band transform button toggles between the modes. When the limited-bandwidth mode is used, a kind
of resampling or, more precisely, a band transform of the data is carried out before the subtraction, so
that the specified bandwidth is projected onto the whole available bandwidth. After the subtraction, the
inverse band transform is applied to the result. Due to this procedure, the subtraction is performed
within the limited frequency band and the remaining frequencies are filtered out. This device is used
because the algorithm is designed to work in the full frequency range, while in practice it is preferable
to use a limited bandwidth. The Low frequency and High frequency parameters set the low and high
frequencies of the bandwidth used, correspondingly.
• Number of traces sets the number of additional channels in the spatial coordinate. It is useful to
consider not only the subtracted trace but also adjacent traces when forming the filter and
performing the subsequent subtraction. This parameter sets the half-width (arm) of the window of
adjacent traces: when it is equal to zero, adjacent traces are not used in the calculation; if it is
equal to 1, then 3 traces are used (the central trace and one adjacent trace on each side), etc.
As adding new channels directly is computationally difficult, a different algorithm for accounting
for adjacent traces is used. First, a filter is calculated for each spatial channel and the subtracted
traces are filtered; then the coefficients for each spatial channel are matched so as to minimize the
residual variance, using the Cholesky algorithm. Each filtered channel is then subtracted from the
raw trace with the corresponding coefficient. To increase the efficiency of the method, a notion of
iteration is introduced: the procedure is performed several times, and the filter for every following
iteration is calculated on the basis of a new cross-correlation matrix. After every iteration, the
difference in trace energy before and after the iteration is compared with the calculation accuracy;
if it exceeds the accuracy, the cross-correlation matrix is recalculated and the next iteration is
performed.
• Max number of iterations – sets the maximum number of such iterations. Once this number is
reached, the iteration cycle stops automatically.
• Filter averaging base – another way of taking adjacent traces into account. It sets the half-width
(arm) of the base over which the autocorrelation and cross-correlation matrices are averaged
during filter calculation. In other words, the shaping filter is averaged over a certain number of
traces. Note that filter averaging can be used simultaneously with spatial channels.
Other parameters:
• Accuracy – calculation accuracy. The parameter is used when comparing the traces’ energy
before and after filtering.
Multi-threading computations:
Multi-threading in the Wave-field subtraction module is organized by ensembles: each thread processes
one ensemble at a time, so several threads can process several ensembles in parallel. The user’s task is
to feed the module with data grouped into ensembles.
This module is meant to accomplish various types of two-dimensional spatial filtering. When this
module is activated the following window appears:
Parameters
In the Filter type field, select the type of filter:
• 2-D Mean - this algorithm averages the samples within the bounds of the filter application
window;
• 2-D Median - this algorithm sorts the samples within the bounds of the filter application window
and outputs the median (central) sample of the sorted set;
• Alpha-Trimmed Mean - this algorithm sorts the samples within the bounds of the filter
application window and averages the range of values centered on the median. This is
equivalent to discarding the samples that fall outside the range and averaging the rest of them.
Next, specify the size of the filter application window, i.e. the horizontal and vertical dimensions of its
operator.
Filter size
• Number of traces for 2-D filter - number of traces to be used as the width of the operator of
two-dimensional spatial filter. This value must be an odd integer.
• Number of samples for 2-D filter - number of samples to be used as height of the operator of
two-dimensional spatial filter. This value must be an odd integer.
There are two modes of filter application. The first (normal) mode replaces the sample value in the
center of the two-dimensional filter operator with the calculated value. In the second mode, the
calculated value is subtracted from the input value in the center of the operator, thus producing
the "difference" dataset.
In the Application mode for 2-D filter field, specify the spatial filter mode:
• Normal - replaces the initial value in the center of the two-dimensional filter operator with the
calculated value.
• Subtraction - subtracts the calculated value from the input value in the center of the
two-dimensional filter operator. It is usually applied to remove the energy of certain coherent
waves.
• Rejection percentage for spatial filter (this option is available only when the Alpha-Trimmed
Mean filter is selected) - in this field, specify the percentage of input samples of the window of the
two-dimensional spatial alpha-trimmed mean filter which will be discarded before averaging
the rest of the samples. A value of 40% means that the lowest 20% of the samples and the highest
20% of the samples will be discarded during the filter calculation. With a value of 0% the
filter outputs the plain mean, whereas a value of 100% yields the median filter.
WARNING! Owing to the nature of median filters, abrupt jumps in sample values may occur as a
result of the reordering of samples during sorting. It is recommended to follow the median filter with a
bandpass filter with a wide passband in order to eliminate these computational artifacts.
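The alpha-trimmed mean is easy to express directly; a compact (unoptimized) sketch with our naming:

import numpy as np

def alpha_trimmed_mean_2d(data, n_traces, n_samples, reject_pct):
    # data: (traces, samples); window dimensions must be odd integers
    ht, hs = n_traces // 2, n_samples // 2
    padded = np.pad(data, ((ht, ht), (hs, hs)), mode='edge')
    k = n_traces * n_samples
    drop = int(k * reject_pct / 100.0 / 2.0)  # samples discarded at each end
    out = np.empty_like(data)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            w = np.sort(padded[i:i + n_traces, j:j + n_samples], axis=None)
            out[i, j] = w[drop:k - drop].mean()  # 0% -> mean, 100% -> median
    return out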
F-K Filter
The module is designed to filter data in the F-K (frequency-wavenumber) domain. The user specifies an
area on the F-K plane, which is subsequently rejected or passed by the filter. The area can be set as an
N-gon or a fan filter (FAN).
Before adding the module to the flow, you need to create a filter polygon. It is defined interactively in
the Screen Display module window. You can view the two-dimensional F-K spectrum of a selected data
fragment, construct a polygon for filtering in the F-K domain, preview the filtering results using the
selected fragment as an example, and modify the parameters on the fly. After creating the polygon and
choosing filtering parameters for the data fragment you can proceed to apply the filter to the entire data
set in the flow using the F-K Filter module.
IMPORTANT: For the module to function correctly, the distances between the traces need to be
approximately the same.
Two-dimensional (spatial) filtering algorithms are used to suppress noise waves or identify wanted
waves with known properties. Spatial filtering is based on transferring the data to another domain (such
as the frequency-wavenumber domain) by applying a mathematical transformation to the traces, filtering
noise waves in that domain, and then converting the results back to the (t, x) domain.
The F-K filter is an example of a two-dimensional filter. It is based on the two-dimensional Fourier
transform, a method of expanding the wave field into plane-wave components. Each plane wave carries
a monochromatic signal which propagates at a certain angle to the vertical. Events with the same dip in
the (t, x) plane are located on the same radial line in the (f, k) plane regardless of their position. As
a result, for example, events that interfere in the (t, x) plane because of different dips may have no
interference and may be successfully separated in the (f, k) plane by their dip values. This allows
eliminating from the data the energy corresponding to noise waves (coherent linear interference in the
form of surface waves, channel waves, laterally scattered energy obscuring true reflections). These
types of noise are usually isolated from the reflected energy in the (f, k) space (Yilmaz, 1989).
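The essence of the operation can be sketched for the fan case (a simplified illustration without tapering; the names and sign conventions are ours):

import numpy as np

def fk_fan_reject(gather, dt, dx, vmin, vmax):
    # reject energy whose apparent velocity |f/k| lies in [vmin, vmax] (m/s)
    spec = np.fft.fft2(gather)                    # axes: (wavenumber, frequency)
    k = np.fft.fftfreq(gather.shape[0], d=dx)[:, None]
    f = np.fft.fftfreq(gather.shape[1], d=dt)[None, :]
    with np.errstate(divide='ignore', invalid='ignore'):
        v = np.abs(f) / np.abs(k)                 # apparent velocity of each (f, k)
    spec[(v >= vmin) & (v <= vmax)] = 0.0         # zero the fan (both dip signs)
    return np.fft.ifft2(spec).real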
Module parameters
• Polygons – a polygon created in the two-dimensional spectrum window of the Screen Display
module.
▪ Mirror polygons – polygon mirroring relative to the F-axis
▪ Polygons – selection of a polygon from the database. Polygons are added/removed with the
Append item and Remove buttons, respectively; the Reset button allows you to return to the
default settings. In the window that appears, it is possible to choose a polygon from the
database or to specify it with the help of the replica system. You can also add several
polygons at once.
• Fan – the range of frequencies, which will be filtered by the F-K filter. The range shall be
specified in the Fan window in the following format: dip1,dip2,f1,f2/dip1,dip2,f1,f2 (m/s,Hz)
• Reject positive wavenumbers – cut out the area of positive wavenumbers
• Reject negative wavenumbers – cut out the area of negative wavenumbers
Operation mode – filter mode (not active in case of the Reject positive (negative) wavenumbers
mode)
• Reject – cut the specified range of spatial frequencies from the input data
• Pass - leave the spatial frequencies within filter's range, and cut all other spatial frequencies
Rejection coefficient - a cut-off coefficient by which the amplitude in two-dimensional filtered area is
multiplied
• Numeric multiplier – a real number from 0 to 1 acts as a coefficient; it is set in the Coefficient
field (0 means suppression of the filtering area within the whole range of amplitudes; 1 means
leaving the data in the filtering area unchanged)
• Decibels – coefficient value expressed in decibels. (From the definition of the logarithm it
follows that a value of 0 dB leaves the data in the filtering area unchanged, while at 10 dB the
amplitudes in the filtering area are suppressed.) The table below shows the relationship between
the cut-off coefficients expressed in the different units:
Numeric multiplier | Decibels
0 | >10
0.5 | 6
1 | 0
Use ensembles – operation is performed within ensembles. Ensembles are determined by the headers
of the Trace Input module.
Taper window width – the width of the taper window as a percentage of the area of the N-gon in
which the amplitudes will be smoothed.
Number of threads – to increase performance and speed of the calculations, you may enable
parallelization. To do this, specify the number of cores to be used by the module.
Viewing the F-K spectrum and setting up the filtering parameters
Viewing the F-K spectrum and creating filter polygons is performed in the Screen Display module.
Once the filter has been set interactively and saved to the database, you can use it to filter the data
in the processing flow. To do this, add the F-K Filter module to the flow.
Select N-gon in the Type area to set the filter area from the project database. To do that,
press the corresponding button and select the polygon created in the 2D spectrum window. It is also
possible to add several polygons:
In addition to the method described above, the filter area can be specified using a fan filter (see figure
below). The range shall be specified in the Fan window in the following format:
dip1,dip2,f1,f2/dip1,dip2,f1,f2 (m/s,Hz)
After setting the filter area as an N-gon or fan filter, you must specify the filter mode (reject or pass) and
define a fixed trace spacing, either by using the two header fields or manually. These dialog box
parameters correspond to the parameters of the interactive filter assignment window discussed above.
F-X Predictive Filtering (F-X Deconvolution)
This module is used to suppress random noise both on trace ensembles and stacked sections.
Principle of operation
The F-X predictive filtering procedure is based on prediction of linear events in the frequency-space
domain.
A linear event in the time domain described by the expression f(x,t) = δ(a + bx − t) is transformed by the
Fourier transform into f(x,ω) = e^(iω(a+bx)) = e^(iωa)·(cos(ωbx) + i·sin(ωbx)) in the frequency domain.
For simple linear events this function is periodic in x. This periodicity (a complex sinusoidal
signal along a constant-frequency section) can be traced along any constant frequency (frequency section)
in the frequency-space (f-x) domain.
The F-X predictive filtering procedure uses a complex Wiener filter to predict the signal for one trace
ahead. The Wiener prediction filter is calculated for spatial series obtained at each frequency by means
of the Fourier transform. Each prediction filter is sequentially applied in two directions (forward and
backward in space), and the results are then averaged to eliminate prediction errors.
Prediction in the f-x domain is applied in small windows to ensure that the assumption about the linearity
of events in the time domain is valid.
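A sketch of the per-frequency prediction step (assuming SciPy; a simplified single-window illustration with our naming — the module additionally works in overlapping windows):

import numpy as np
from scipy.linalg import solve_toeplitz

def fx_predict(window, flt_len, noise_pct=1.0):
    # window: (n_traces, n_samples); returns the predicted (coherent) part
    spec = np.fft.rfft(window, axis=1)
    out = np.zeros_like(spec)
    for kf in range(spec.shape[1]):              # loop over frequency slices
        s = spec[:, kf]                          # complex spatial series
        n = len(s)
        r = np.correlate(s, s, mode='full')[n - 1:n + flt_len]  # lags 0..flt_len
        r[0] *= 1.0 + noise_pct / 100.0          # white noise regularization
        f = solve_toeplitz((r[:flt_len], r[:flt_len].conj()), r[1:flt_len + 1])
        fwd = np.convolve(s, np.r_[0, f])[:n]                      # predict forward
        bwd = np.convolve(s[::-1], np.r_[0, f.conj()])[:n][::-1]   # and backward
        out[:, kf] = 0.5 * (fwd + bwd)           # average the two predictions
    return np.fft.irfft(out, n=window.shape[1], axis=1)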
Parameters
Filter length, [traces] – length of the prediction filter, in traces.
White noise level, [%] – percentage of white noise added to the zero lag of the autocorrelation.
Default value is 1%. Increasing this value eliminates more noise, making the data “smoother”.
Horizontal window, [traces] – number of traces in the horizontal F-X prediction window. The value
must be greater than the filter length.
Time window overlap, [ms] – amount of time window overlap added at the top and at the bottom of
the time window for “vertical mixing” purposes.
Divide by ensembles – the F-X predictive filtering function will be applied within ensembles.
Ensembles are determined based on the Trace Input module headers.
Mute hard zeros – if the data contain zero trace values resulting from muting, those values will remain
zero after the F-X deconvolution procedure.
Number of threads – number of threads the process is subdivided into when using the module. It
accelerates the module’s work. Its maximum useful value equals the number of CPU cores.
TFD Noise Attenuation
The module is designed for attenuation of noise localized in the frequency domain, and possibly in the
time domain as well. It allows removing local narrow-band noise without affecting the spectrum
of the rest of the record.
Brief theory
1. For each trace of a seismogram, the amplitude spectrum is computed in the indicated time
window. The time window width determines the number of frequency samples into which the
amplitude spectrum is subdivided (the smaller the window width, the fewer the frequency
samples and the larger the frequency sampling interval).
2. For the whole seismogram (or the specified ensemble of traces), the median value is computed for
each frequency sample.
3. The median of the resulting medians, multiplied by the specified multiplier, is taken as the
threshold value for the whole seismogram.
4. Every frequency sample is compared with the threshold value. If the value in the
current sample exceeds the threshold, it is replaced with the average value computed over the
set of traces indicated by the Trace Aperture parameter, within the same frequency band.
5. The multiplier parameter (Threshold Multiplier) allows the user to control the threshold value,
making it possible to avoid too «strong» (too low a threshold) or insufficient (too high a
threshold) amplitude balancing.
Threshold value calculation scheme for a seismogram (items 1-3 of the description) is shown
below:
(figure: per-frequency medians of the ensemble; the median M of these medians, multiplied by the
Threshold Multiplier, gives the threshold value)
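The thresholding and replacement logic of items 2-4 can be sketched as follows (our naming; one time window of one ensemble):

import numpy as np

def tfd_attenuate(spectra, multiplier, aperture):
    # spectra: (n_traces, n_freqs) amplitude spectra within one time window
    threshold = multiplier * np.median(np.median(spectra, axis=0))
    out = spectra.copy()
    half = aperture // 2
    n = spectra.shape[0]
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)  # traces in the aperture
        bad = out[i] > threshold
        out[i, bad] = spectra[lo:hi, bad].mean(axis=0)   # replace noisy samples
    return out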
Parameters
Time-Domain Options:
• Processing time intervals, [ms] – start and end recording time (in milliseconds), to which the
noise attenuation procedure is applied
• Time window width, [ms] – time window, in which the amplitude spectrum for every trace is
calculated
• Tapering, [%] – this parameter specifies the width of the zones at the borders of the indicated
time window in which the amplitudes are weighted linearly (or according to a Gaussian law,
if Gauss Taper is selected) from 0 to 100%. Tapering is defined as a percentage of the
time window width and helps avoid edge effects at the window borders.
Frequency-Domain Options:
• Processing frequency interval, [Hz] – minimum and maximum frequencies in Hz to be
involved in the noise attenuation process.
Limiting the frequency band reduces the computation time. Before limiting the frequency
band, it is recommended to filter the corresponding band using the Bandpass
Filtering module.
• Aperture, [traces] – number of traces over which the average replacement value for noise
samples is calculated.
• Threshold – a multiplier that allows controlling the threshold value calculated from a
set of traces. It makes it possible to avoid too «strong» (too low a threshold) or insufficient
(too high a threshold) amplitude balancing.
• Use muting – if, before the frequency balancing procedure, a seismogram contained zero
values (assumed to represent a muting result), this option returns the zero values to the same
positions if they were replaced as a result of the module’s work.
• Divide by ensembles – function will be applied within ensembles. Ensembles are determined
based on the Trace Input module headers.
• Number of threads – number of threads the process is subdivided into when using the
module. It accelerates the module’s work. Its maximum useful value equals the number
of CPU cores.
Spectral shaping
This module is used to alter the shape of the input trace signal amplitude spectrum.
Parameters
• Whiten shaping — shapes the signal amplitude spectrum according to the specified contour,
leaving the phase spectrum unchanged. The filter is created by specifying the
Frequency:Amplitude value pairs; the amplitude values are specified in percent.
For example, the spectrum resulting from the application of such a filter will be flat in the 10-55 Hz
frequency range, will rise from 0 to 100% in the 5-10 Hz range, and will drop from 100% to
0 in the 100-115 Hz range.
• Amplifier filter – as a result of this procedure, the current amplitude spectrum of the
incoming traces will be multiplied by the contour specified in the shape parameters, without
changing the phase spectrum. The contour shape is defined in a similar manner, by
specifying the Frequency:Amplitude value pairs.
• Frequency AGC – this procedure performs automatic gain control in the frequency domain.
The gain factor is calculated for each position of a window of the specified length (in Hz)
sliding across the frequencies of the amplitude spectrum, as the average of the absolute values
throughout the window. The frequency step is obtained by dividing the Nyquist frequency
by the number of trace samples. This factor is then applied to the window's central value. The
amplitude spectrum variations are smoothed as a result of this procedure.
Start frequency – start frequency to which the gain control will be applied.
End frequency – end frequency to which the gain control will be applied.
• Window length – value defining the length of the window (in Hz) that will be used to
calculate the gain.
• White noise level – defines the level of white noise to be added to the signal amplitude
spectrum. Is specified in percent.
• Reapply trace muting – restores zero values in trace amplitudes (resulting from muting)
after running the filter.
Spectral whitening
The spectral whitening procedure is used to expand and level the frequency spectrum of the seismic data.
Each input trace is converted to the frequency domain, multiplied by each of the specified
frequency-band contours, and converted back to the time domain. The procedure produces a number
of traces corresponding to the number of frequency bands specified in the module parameters, i.e. each
trace is the source trace filtered in a particular frequency range. Automatic gain control (AGC) is applied
to the resulting traces in the time domain. Then the traces with the AGC applied and the AGC factors
are stacked to obtain the final trace and the average AGC factor values. The true signal amplitude is restored
by dividing the resulting trace amplitude by the average AGC factor value.
Parameters:
• Manual design – manual specification of frequency bands. Four frequency values define one
frequency band: F1 = 0%, F2 = 100%, F3 = 100% and F4 = 0%. Six frequency values define
two frequency bands where the F3 and F4 frequencies are common for the first and second
frequency band. For the second frequency band F3 = 0%, F4 = 100%, F5 = 100%, F6 = 0% etc.
(see the screenshot).
• Automatic design – automatic division of the frequency band into the specified number of
bands. The F1 = 0%, F2 = 100%, F3 = 100% and F4 = 0% frequencies define the “common” pass
band. This band is divided into the number of bands specified in the Number of panels field.
• AGC operator length – value defining the length of the window (in ms) that will be used to
calculate the gain factor.
• Reapply trace muting – restores zero values in trace amplitudes (resulting from muting) after
running the filter.
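The band-split / AGC / stack logic can be sketched as follows (assuming SciPy; a simplified single-trace illustration, with the gain computed from a smoothed envelope and the stacked gains used for the amplitude restoration):

import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectral_whitening(trace, fs, bands, agc_len):
    # bands: list of (low, high) Hz pairs; agc_len: AGC operator length in samples
    stack = np.zeros_like(trace)
    gain_stack = np.zeros_like(trace)
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        panel = sosfiltfilt(sos, trace)               # band-limited copy
        env = np.convolve(np.abs(panel), np.ones(agc_len) / agc_len, mode='same')
        g = 1.0 / np.maximum(env, 1e-12)              # AGC factor per sample
        stack += panel * g                            # stack the AGC'ed panels
        gain_stack += g                               # stack the AGC factors
    return stack / np.maximum(gain_stack, 1e-12)      # restore true amplitude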
Radial Trace Transform
Theory
The module is designed to transform traces from the (x, t) domain into the (v, t') domain and back,
following the rule:
t' = t; v = x/t
Several filtering types (spatial and temporal) and different muting types can be applied to traces in the
V-T domain.
The transform can be used for the attenuation of coherent noise with linear events starting from the
origin of the axes (for instance, direct and air waves, etc.).
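The forward transform amounts to resampling each time slice along the trajectories x = v·t; a straightforward sketch (our naming and units):

import numpy as np

def radial_transform(gather, offsets, dt, velocities):
    # gather: (n_traces, n_samples); offsets in m; velocities in m/s
    order = np.argsort(offsets)
    n_samples = gather.shape[1]
    out = np.zeros((len(velocities), n_samples), dtype=gather.dtype)
    for j in range(n_samples):
        t = j * dt
        amp = gather[order, j]                      # one time slice
        for i, v in enumerate(velocities):
            out[i, j] = np.interp(v * t, offsets[order], amp)
    return out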
Parameters
The figure demonstrates the parameter settings dialog box of the Radial Trace Transform module.
• Start apparent velocity – minimum apparent velocity (in m/ms or km/s, which is the same)
• End apparent velocity – maximum apparent velocity (in m/ms)
• Apparent velocity step – step of the search (step between the generated traces, in m/ms)
Inverse transform (Inverse). The inverse transform can be performed in two modes: into a set of offsets specified by the user (in this mode, a trace ensemble in the x-t domain with the specified set of offsets is obtained from each ensemble of radial traces), or using a reference dataset.
It is assumed that the offsets are stored in the OFFSET header field of the raw data when the forward transform is performed; the traces formed as a result of the transform have the OFFSET header field containing the apparent velocity. For the inverse transform it is the other way round.
The In reference dataset traces grouped by… parameter must be the same as the first sorting field of the input radial traces.
IMPORTANT! Each ensemble of input traces in any domain (x-t or r-t) should contain at least 2
traces.
Application
Let us consider a typical example of module application aimed at ground roll suppression. Assume that the raw data have the header field TRC_TYPE = 1.
Forward transform:
1. Trace Input (FFID:OFFSET sorting, OFFSET range -0.35 to 0.35) <- Dataset B (the OFFSET range, where the apparent velocities are kept at the moment, is limited by the apparent velocities of the ground roll)
2. Bandpass Filter – 0-0-8-12 (Hz) (pass only the ground roll)
3. Radial Trace Transform (Inverse; Use reference dataset – Dataset C, group header FFID)
4. Trace Header Math (TRC_TYPE = 2)
5. Trace Output → Dataset D
In this example, a band-pass filter (a high-cut filter) is used to isolate the ground roll. Instead of the band-pass filter, however, you may use other 1D or 2D filters to pick out the noise field in the R-T domain.
Then the noise model is subtracted from the raw data:
1. Trace Input <- Dataset A, Dataset D (the sorting parameters should arrange the traces in pairs in the flow: “minuend 1”, “subtrahend 1”, “minuend 2”, “subtrahend 2”, …)
2. Trace Math
3. Trace Output → Dataset E
This model-based approach (generating a noise model and subtracting it from the raw data) is preferable to the direct approach (subtracting the noise in the R-T domain), because the inverse transform can, if only moderately, distort the data. Moreover, it allows the noise model to be edited before subtraction.
Reference
Henley, D.C., 2003, Coherent noise attenuation in the radial trace domain. Geophysics, 68, p. 1408.
Radon Transforms
Theory
The forward linear discrete Radon transform of a seismogram containing N traces d(x_k, t) is defined as

m(p_j, τ) = Σ_{k=1..N} d(x_k, τ + p_j·x_k), j = 1, …, J,

and the inverse transform as

d(x_k, t) = Σ_{j=1..J} Δp_j · m(p_j, t − p_j·x_k),

where Δp_j = p_{j+1} − p_j denotes the increment of the ray parameter. The Fourier transform of these equations over the time variable gives

M(p_j, ω) = Σ_{k=1..N} D(x_k, ω) · e^{iω·p_j·x_k},
D(x_k, ω) = Σ_{j=1..J} Δp_j · M(p_j, ω) · e^{−iω·p_j·x_k}.

In matrix notation (the asterisk signifies the adjoint, i.e. conjugate-transposed, operator), the data are the result of applying the inverse Radon transform, d = Lm. Then (Yilmaz, 1994), we need to find m while minimizing the square-norm functional

||d − Lm||²,

which leads to the normal equations L*Lm = L*d. As the matrix L*L is very close to a degenerate one, it is worth using a regularizing parameter µ, so that the solution becomes m = (L*L + µI)⁻¹L*d.
The implementation of the forward and inverse parabolic Radon transforms is, on the whole, similar.
The transform can be used to eliminate various kinds of coherent noise with linear moveout (for example, direct and air waves).
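A minimal sketch of the regularized forward linear Radon transform computed per temporal frequency (Python/NumPy; an illustration of the formulas above, names are ours):

import numpy as np

def linear_radon(data, x, dt, p, mu, fmin, fmax):
    # m(w) = (L*L + mu I)^-1 L* d(w), where d = L m is the inverse transform.
    nx, nt = data.shape
    D = np.fft.rfft(data, axis=1)
    freqs = np.fft.rfftfreq(nt, d=dt)
    M = np.zeros((len(p), D.shape[1]), dtype=complex)
    eye = np.eye(len(p))
    for k in np.where((freqs >= fmin) & (freqs <= fmax))[0]:
        L = np.exp(-2j * np.pi * freqs[k] * np.outer(x, p))  # inverse-transform operator
        M[:, k] = np.linalg.solve(L.conj().T @ L + mu * eye,
                                  L.conj().T @ D[:, k])
    return np.fft.irfft(M, n=nt, axis=1)                     # tau-p domain ensemble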
Parameters
The figure illustrates the parameter dialog window of the Radon Transforms package.
• Linear
• Parabolic
Parameters of transform:
• Frequency interval, [Hz] – frequency range (Hz) of the signal in the data subject to the transform.
• Regularization
• Reference offset – offset (m) for which the range of moveouts is specified when the Parabolic mode is chosen. As a rule, it should be approximately equal to the maximum offset in the seismograms. Irrelevant in the linear Radon transform mode.
Tapering – smoothing of the ensemble edges, both in space (Left, Right, in traces) and in time (Top, Bottom, in ms). To reduce the distortions associated with the computation of the discrete Fourier transform, the ensemble is multiplied by a piecewise-cosine weight function.
It is assumed that the offsets are stored in the OFFSET header field of the raw data when the forward transform is performed; the traces formed as a result of the transform have the OFFSET header field containing the ray parameter (or the corresponding moveout at the reference offset). The inverse transform works the other way round.
IMPORTANT! Each ensemble of input traces in any domain (x-t or r-t) should contain at least 2 traces.
Application
The application of the module is generally the same as that of the Radial Trace Transform module and is described in the corresponding section of this manual.
Multi-threading in the Radon Transforms module is organized by ensembles: each thread processes one ensemble at a time, and several threads can process several ensembles in parallel. The user's task is to feed them with ensembles.
Burst Noise Removal
The module is designed for removal of high-amplitude noise bursts from the seismic traces. The algorithm is as follows:
1. The average of the absolute amplitudes over all samples of all traces within the flow is calculated.
2. Within a sliding window of several traces, alpha-trimmed average of absolute amplitudes is
calculated for each sample.
3. The average for the current sample within the window is compared with the average over the flow. If the average for the sample over the window is too small (that is, does not exceed a certain percentage of the average over the flow), this sample is skipped, as the module is not supposed to change low-amplitude samples.
4. Otherwise, the amplitude of this sample at the middle trace of the window is compared with the average for this sample over the window. If the absolute amplitude exceeds the average N times or more, this sample at the middle trace is considered to be a burst, and its amplitude is substituted by the alpha-trimmed average for this sample over all traces within the current position of the window (see the sketch below).
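A minimal sketch of the algorithm described above (Python/NumPy; keeping the sample sign at substitution is our assumption, names are ours):

import numpy as np

def remove_bursts(data, n_win=7, reject_pct=20.0, low_pct=10.0, n_times=4.0):
    # data: (n_traces, n_samples). Steps follow the numbered description above.
    flow_avg = np.mean(np.abs(data))                     # step 1: average over the flow
    out = data.copy()
    half = n_win // 2
    n_tr, n_s = data.shape
    for i in range(half, n_tr - half):
        win = np.abs(data[i - half:i + half + 1, :])     # sliding window of traces
        for s in range(n_s):
            v = np.sort(win[:, s])
            k = int(len(v) * reject_pct / 200.0)         # trim from each end
            trimmed = v[k:len(v) - k].mean()             # step 2: alpha-trimmed average
            if trimmed < flow_avg * low_pct / 100.0:
                continue                                 # step 3: skip low-amplitude samples
            if abs(out[i, s]) >= n_times * trimmed:      # step 4: burst detected
                out[i, s] = np.sign(out[i, s]) * trimmed # substitute by the trimmed average
    return out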
Parameters
• Window size for average value calculation (traces) — number of traces of the sliding window
used for alpha-trimmed average calculation.
• Rejection percentage (%) — percentage used for the alpha-trimmed average calculation. This percentage of the highest and lowest amplitudes will be rejected; the remaining values will be averaged.
• Do not change amplitudes lower than (%) of the average — this is the threshold percent from
the average over the flow. When the alpha-trimmed average over the window is below this
threshold, this sample is skipped and guaranteed to remain unchanged.
• Modify values when exceed average in more than N times — set the N parameter here. When the absolute amplitude of the sample at the middle trace is N times higher than the average for this sample over the window, it is substituted by the average.
F-K Amplitude Power
The module raises the 2D F-K or (if the FX domain only checkbox is switched on) the 1D F-X amplitude spectrum to a power with an arbitrary real Exponent. The phase spectrum remains unchanged. The result is transformed back to the original T-X domain.
Parameters
The module parameters are shown below:
2D spectrum operations are performed independently for each position of a time-spatial sliding window. The window widths and shifts at sliding are defined separately in the time dimension (Time window, ms and Time shift) and the trace dimension (Trace window and Trace shift).
When the FX domain only check box is switched on, the trace dimension parameters do not affect the result – 1D spectrum operations are performed in a time window sliding downwards trace by trace.
Exponent values greater than 1 for the 2D F-K spectrum increase the coherency of the input recordings and suppress random noise. In the 1D FX domain only case, this results in a kind of frequency filtering, making the amplitude spectrum narrower.
Exponent values smaller than 1 in the 1D FX domain only case result in a kind of spectral whitening, making the amplitude spectrum broader and less sharp. We do not recommend using exponents smaller than 1 in the F-K domain.
Get by ensemble option, when switched on, prevents the spatial sliding window from crossing boundaries between ensembles. Thus, in this case, each ensemble will be processed separately.
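A minimal sketch of the core operation for a single window (Python/NumPy; the actual module applies tapered sliding windows, which are omitted here, and exponents are assumed non-negative):

import numpy as np

def fk_amplitude_power(panel, exponent):
    # panel: one (traces x samples) window in the T-X domain.
    spec = np.fft.fft2(panel)                 # 2D F-K spectrum
    amp = np.abs(spec)
    phase = np.angle(spec)                    # the phase spectrum is kept unchanged
    out = np.fft.ifft2(amp ** exponent * np.exp(1j * phase))
    return out.real                           # back to the original T-X domain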
Ensemble Equalization
Trace ensemble amplitude equalization within the specified window.
Parameters
• Start – window start time (ms)
• End time – window end time (ms)
• Norm – normalization type
▪ RMS – relative to the RMS value within the window.
▪ Mean – relative to the mean value within the window.
▪ Max – relative to the maximum value within the window
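A minimal sketch of the operation (Python/NumPy; names are ours):

import numpy as np

def ensemble_equalization(ensemble, dt, start_ms, end_ms, norm='RMS'):
    # ensemble: (n_traces, n_samples); one scale factor per ensemble.
    i0 = int(start_ms / (1000.0 * dt))
    i1 = int(end_ms / (1000.0 * dt))
    win = ensemble[:, i0:i1]
    if norm == 'RMS':
        level = np.sqrt(np.mean(win ** 2))    # RMS value within the window
    elif norm == 'Mean':
        level = np.mean(np.abs(win))          # mean absolute value within the window
    else:                                     # 'Max'
        level = np.max(np.abs(win))           # maximum absolute value within the window
    return ensemble / max(level, 1e-12)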
F-X-Y Deconvolution (F-X-Y Predictive Filtering)
The FXY Deconvolution module is designed to suppress noise in three-dimensional (3D) and two-dimensional (2D) data using a Wiener prediction filter in the F-X-Y domain. In fact, this module is an extension of the FX deconvolution algorithm (predictive filtering in the F-X domain, see the description of the FX Predictive Filtering module) to the three-dimensional case. The predictive filtering algorithm for the 3D case separates flat events (similar to the linear events separated in the 2D case) from the noise background.
The module imposes the following requirements on the input data:
• The data supplied to the module input must be sorted as ILINE_NO:XLINE_NO or XLINE_NO:ILINE_NO;
• There must be no empty traces in the dataset (i.e., a non-zero trace must exist for each pair of ILINE_NO and XLINE_NO values). However, the data may begin with any ILINE_NO and XLINE_NO value;
• In the case of the Framed mode, provision must be made for a close fit of adjacent frames (i.e., there should be no gaps of seismic traces in the dataset).
Parameters
▪ Rate: the rate of adaptation of the filter, which determines the degree of change of the filter coefficients at each step across the matrix with respect to the previous step. The smaller the value, the more aggressive the noise reduction (although a part of the signal energy can be lost in this case). For large values, the noise reduction will be softer (more noise will remain). We recommend using parameter values in the range from 0.5 to 2. It should be noted that the adaptive algorithm is much faster than the exact one.
▪ Size (ILINE x XLINE): the size of the filter, which is specified in i-lines and x-lines.
It should be borne in mind that field values will be interpreted reversely in case of reverse sorting: an
inline will become an x-line and an x-line will become an i-line.
PLEASE NOTE: Only odd values of the filter size are permitted. Even values will be automatically
increased by 1.
When switching the filter type from Single-pass (single-pass, unrealizable) to Multi-pass (multi-pass,
realizable), the window size is automatically doubled (approximately) according to the following
formula: NR = NU × 2 + 1. If any of the window sizes is equal to 1, it will not change.
When switching the filter type from Multi-pass (multi-pass, realizable) to Single-pass (single-pass,
unrealizable), the window size will be automatically halved (approximately) according to the following
formula: NU = [NR / 4] × 2 + 1. If any of the window sizes is equal to 1 or to 3, it will not change.
Increasing the size during the transition to a realizable filter ensures that the sizes of the 4 submatrices of the semi-causal filter (for details, see the Semi-Causal (Realizable, Four-Pass) Filter section) approximately match the size of the non-causal filter matrix and lead to a similar result at the output.
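For reference, the two size conversions can be written as follows (a short Python sketch; function names are ours, the special cases follow the rules above):

def to_multi_pass(nu):
    # Single-pass -> Multi-pass window size: NR = NU * 2 + 1 (size 1 is kept)
    return nu if nu == 1 else nu * 2 + 1

def to_single_pass(nr):
    # Multi-pass -> Single-pass window size: NU = [NR / 4] * 2 + 1 (sizes 1 and 3 kept)
    return nr if nr in (1, 3) else (nr // 4) * 2 + 1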
• ILINE – XLINE window: the size of the sliding filtering window (rectangular data area). If
the input data are sorted as ILINE_NO:XLINE_NO, the first field will specify the number of
i-lines that enter the window at the same time, and the second field will specify the number of
x-lines that enter the window (i.e., traces of each i-line). For reverse sorting, the values in the
fields will be interpreted reversely.
NOTE: The ratio between the size of the filter and that of the spatial window plays an important role. If the filter size is fixed, reducing the spatial window will result in decreased noise reduction but will better preserve the useful signal of curved seismic events (in a small window, a curved event is close to a flat one). Conversely, a large spatial window with a small filter may smooth the data too much, removing both the noise and the useful signal of non-flat events.
• ILINE – XLINE windows overlap: overlapping of adjacent spatial windows in i-lines and x-
lines. (Interpretation of values in the fields also depends on the sorting of input data; in case of
reverse sorting, i-lines will become x-lines and vice versa).
NOTE: The overlapping value must not exceed half the width of the spatial window.
• Time window: if checked, data will be processed in time windows, the duration of which is set
on the right in milliseconds. Since the predictive filtration algorithm accentuates flat events at
the noise background, splitting traces into time windows can reduce the loss of the useful signal
in curved reflection areas even at a filter that is small as compared to the spatial window (the
curved border is divided into short fragments, each of which is close to flat).
Mute hard zeroes: if checked, the zero samples in the output data will be at the same locations as in the input data. This parameter is used to recover the muting.
System resources
• Memory Limit – the maximum amount of additional RAM (in gigabytes) for the operation of the module. If the module requires more memory than indicated in this field, it will use a swap file on your hard drive to store some housekeeping data. If set to 0, the module will use the minimum amount of memory required and will use the swap file for all other data.
• Number of threads – the number of parallel processing threads (the number of logical processor cores available for the operation of the module).
• Set maximum – when this button is clicked, the value in the box with the number of threads will be set to the number of logical cores available on the computer.
Method Theory
Calculation of the Wiener complex filter kernel with an arbitrary "active" point
There is a rectangular area of data (a window) where each value is a complex number (the number is a
combination of elements of amplitude and phase spectra). It is assumed that additive noise is present in
the data. A matrix of the Wiener filter coefficients for the specified window must be constructed.
According to the article [Chase, 1992], the point of application of the filter is located in the center of the filter matrix. Thus, the data around this center point, but not the point itself, are used to calculate the filtered value. For example, a 3x3 filter will have the following matrix:
w00  w01  w02
w10   0   w12
w20  w21  w22
To build a filter, you must go through the window only once; this is why the filter is also called a single-
pass filter.
NOTE: the data window must have a border with a width of half the filter size. In the absence of data, linear extrapolation from the rows/columns nearest to the border is used.
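A minimal sketch of how such a single-pass kernel with a zero center can be estimated by least squares (Python/NumPy; an illustration only, not the module's implementation; the window is assumed larger than the filter):

import numpy as np

def single_pass_kernel(window, size=3):
    # window: 2D complex array (one frequency slice of the data).
    h = size // 2
    center = size * size // 2
    rows, rhs = [], []
    nr, nc = window.shape
    for i in range(h, nr - h):
        for j in range(h, nc - h):
            patch = window[i - h:i + h + 1, j - h:j + h + 1].ravel()
            rows.append(np.delete(patch, center))   # neighbours only, centre excluded
            rhs.append(window[i, j])                # value predicted from the neighbours
    # a single pass through the window builds one linear system
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.insert(coef, center, 0.0).reshape(size, size)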
Let us explain this transition with an example. Take the non-causal 3x3 filter discussed in the previous section:
w00  w01  w02
w10   0   w12
w20  w21  w22
Let us make the filter causal along at least one axis. To do this, mentally draw a vertical and a horizontal line through the center of the matrix. These lines divide the matrix into 4 new matrices:
w'00  w'01  w'02
w'10   0    w'12

w''00  w''01
w''10   0
w''20  w''21

w'''10   0     w'''12
w'''20  w'''21  w'''22

w''''01  w''''02
   0     w''''12
w''''21  w''''22
The primes above the elements show that, strictly speaking, the elements of the 4 new matrices differ from the elements of the original matrix.
The first matrix refers to the vertical direct filter; the second, to the horizontal direct filter; the third, to
the vertical reverse filter; and the fourth, to the horizontal reverse filter.
Matrices of any sizes can be similarly constructed. However, the number of rows and columns in the
original non-causal matrix must be odd. The "active" points of 4 filters are located at the centers of
respective sides of the matrices.
Each of the four matrices is calculated in a single pass through the data window by solving a system of linear equations. During filtering, the convolution is performed 4 times; the results are then averaged with a weight of ¼.
3D Regularization
Theory
The fold and, especially, the offset distribution of the data can vary significantly in marine and land surveys. This variation degrades velocity analysis, stacking and migration, leading to footprints visible in the final data which are often impossible to eliminate post-stack. The goal of 3D regularization is to make the offset distribution uniform by interpolating offset-bin volumes.
The goal of 3D regularization is the same as that of flex binning. Flex binning works as a simple nearest-neighbour interpolation, which is accurate only for horizontal seismic events. F-Kx-Ky reconstruction is accurate for dipping events until the dips are aliased in the data.
The procedure works in overlapping spatial/temporal blocks. For each block, it searches for Fourier coefficients on a regular grid that match the irregularly spaced input data through the backward Fourier transform. The regular Fourier coefficients are then transferred back into the X-Y-T domain through the regular Fourier transform.
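The core idea can be sketched in 1D as follows (Python/NumPy; a damped least-squares illustration without the sparseness constraints discussed below, names are ours):

import numpy as np

def fourier_regularize(values, coords, wavenumbers, coords_out, mu=1e-3):
    # Find Fourier coefficients m on a regular wavenumber grid such that the
    # backward transform F m reproduces the data at the irregular coords, then
    # evaluate the coefficients on the regular output coordinates.
    F = np.exp(2j * np.pi * np.outer(coords, wavenumbers))
    A = F.conj().T @ F + mu * np.eye(len(wavenumbers))   # damped normal equations
    m = np.linalg.solve(A, F.conj().T @ values)          # regular-grid coefficients
    F_out = np.exp(2j * np.pi * np.outer(coords_out, wavenumbers))
    return (F_out @ m).real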
The scheme described in [2] and implemented here as the core algorithm provides a well-founded approach with clear assumptions on the basis of which the missing data are reconstructed. The tests show that the result is free of footprint and interpolation artifacts. At the same time, the scheme has the following weaknesses:
• It is sometimes not obvious how to choose the damping factor, which balances between the “amount of sparseness” in the solution and the data fit.
• In the case of a high contrast of signal within the same processing block (say, an intensive water bottom interfering with weak sub-bottom events), enforcing the sparseness of the solution leads to wiping out the weak signals.
References:
[1] Xu, S., Y. Zhang, D. Pham, and G. Lambare, 2005, Antileakage Fourier transform for seismic data regularization: Geophysics, 70, no. 4, V87–V95.
Usage
Recommendations
It is recommended to apply, prior to running 3D regularization, all the de-noising that will not benefit from regularized data. Examples of such de-noising are all 2D de-noising procedures and SRME demultiple.
The module has two basic modes of operation: with or without searching for a sparse solution, controlled by the Search for sparse solution flag. When this flag is off, the procedure runs quicker but cannot handle undersampled data. This is a good choice if you need to regrid the data from a regular grid to some rotated grid keeping the same or a smaller bin size, or to join several vintages (provided that each offset bin is “dense”, in the sense that almost all CDP bins have at least one trace, except for certain isolated bins). If this is not the case, the sparse solution mode should be turned on. It has several parameters for tuning the sparse inversion process.
In almost all cases, on both marine and land data, the nearest offset bin needs special treatment. It is recommended to group the first several offset bins into one for regularization, to form a bin that will have data throughout the whole survey. The reason is that for a typical marine or land survey the near offsets are available only near the central streamer (shot/receiver line). Interpolating these nearest bins would require adding a very aggressive “sparseness” to the data in these bins, which may negatively affect the final stacked result. For a marine survey, the first bin size should typically be taken as the 0–L offset range, where L is the nearest offset on the far-most cable. For a land orthogonal survey, the typical first bin size should equal one half of the diagonal of the shot/receiver line grid (for instance, if the shot line spacing is 400 m and the receiver line spacing is 300 m, the first bin size should be 0.5 × sqrt(300² + 400²) = 250 m, and the bin range will be 0–250 m).
Processing flow
Prior to regularization, the data should be NMO-corrected or partially NMO-corrected to the bin centers (the centers of the bins used for regularization). After regularization, NMO may be unapplied (the interpolated traces keep the offset header value of the nearest original trace).
The data should be sorted in offset bin/inline/xline or offset bin/xline/inline order. Any user-defined header may be used as the offset bin header. The sorting order is specified in the procedure parameters.
Parameters
Headers/Block definitions
• Overlap in Y-direction
Total overlap between adjacent blocks in the Y direction (expressed as a fraction of the Taper length between blocks in Y direction). The overlap between blocks can be made bigger than the taper zone, which is generally recommended to avoid spatial edge effects between processing blocks. The default values for the size of processing blocks (40), taper (10) and overlap (1.5) mean the following:
• Overlap in X-direction
Total overlap between adjacent blocks in the X direction (expressed as a fraction of the Taper length between blocks in X direction). See above for details.
3D Grid
Manual
Define 3D regularization grid manually by specifying grid origin, grid size and azimuth.
The grid can also be defined by three tie points, in the format:
INLINE1,XLINE1:X1,Y1/INLINE2,XLINE2:X2,Y2/INLINE3,XLINE3:X3,Y3
Example:
1050,1150:480437.77,6729632.89/1050,1900:498384.25,6724202.74/1825,1150:486048.95,6748177.69
Antialias grid spacing. Sometimes the input data do not contain any signal in the whole range of spatial frequencies of the output grid. In that case, setting the antialias grid spacing will reduce the runtime and act as a dip filter for dipping noise.
The antialias grid spacing does not affect the number and spacing of the output regular data.
It is possible to set both the xline and inline antialias grid spacings, or either of them, leaving 0.0 for the other, thus providing the antialias filter in only one of the spatial dimensions.
Minimum xline
Sparse solution
Selects the algorithm for F-Kx-Ky reconstruction. If Search for sparse solution is off, a simpler algorithm that works for well spatially sampled data is used. If Search for sparse solution is on, the sparse solver is used.
'Basic' algorithm
The quicker algorithm may be used under the assumption that the data are generally not undersampled in each offset bin. If the data are actually undersampled in a given spatial block, the algorithm will reduce the number of spatial frequencies in the solution by throwing away high spatial frequencies (which is the same as setting the antialias grid spacings, but done adaptively in each spatial block). If the empty bins are spaced nearly uniformly or randomly, the solution will be found but will ultimately filter out high spatial frequencies. If the data have an organized but nonuniform structure of empty bins, the straightforward inversion of the Fourier coefficients will be unstable.
When a block cannot be reconstructed, the input data for the block are output unchanged. The header word REGMARK is added to the trace headers: a value of 0 indicates an original trace output without regularization, while each regularized trace has REGMARK = 1. Another header word, REGDIST, contains, for each interpolated trace, the distance to the nearest original trace (expressed in fractions of the bin dimensions). This header word may be used to drop traces that are far from the original ones.
It is recommended to use the basic algorithm only in the case of strictly spatially oversampled data, or when the spatial sampling frequency of the data does not change in the process of regularization (for instance, regridding of already regularly sampled data to a new grid).
Sparse algorithm
The sparse algorithm can handle nonuniform input data by finding a sparse solution in the F-Kx-Ky domain for each time-frequency slice. The solution may not exactly fit the input data; the noise and dim events may be wiped out. Sparse F-K reconstruction may require tuning through the parameters described below. By fine-tuning the parameters, it is possible to manage, to some extent, the balance between the 'sparseness' and smoothness of the solution and the fit to the input data. The parameters work in conjunction with each other, so one needs to experiment with them on a small test dataset first to fully understand the strategy of tuning the sparse inversion.
Stability factor has the theoretical meaning of relative noise variance. It is a general stabilizing factor which also acts as a cut-off to wipe away dim dipping events from the sparse solution. The Stability factor acts as the balance between the “sparseness” of the solution and the data fit at each data-fit iteration (the final data fit may be perfect even with a high stability factor if a bigger number of data fit iterations is set). The Stability factor should be kept small, but not too small. If it is too big, the solution will be minimum-norm (not sparse), producing zeros away from the input traces, while the fit to the input data will not be good either (the output will be smoothed). If the stability factor is too small, the output may become extremely noisy, and the sparseness of the solution may not be achieved. The Stability factor may also be thought of as the 'white noise level' in deconvolution.
Sparseness parameter controls how dim events and noise will be attenuated in the reconstructed result. 0 means no sparseness: in the case of a lack of input data, the solution found for the F-Kx-Ky Fourier coefficients will be the minimum-norm solution. In terms of the regularization result, the minimum-norm solution looks like near-zero interpolated traces away from the original trace positions, and nearly input traces at positions close to the input trace positions. But if you see a picture like this, it does not necessarily mean that you should increase the Sparseness parameter. Typically it is better to start with a fixed Sparseness of 0.5–1, make the solution sparse with the Sparseness stability and Stability factor parameters, and then tune the output in terms of keeping the required amount of dim events through the Sparseness parameter.
Normalize to quantile parameter gives more control over how sparseness is enforced in the solution. The default value of 1 means that the sparseness of the elements of the F-Kx-Ky spectrum will be enforced relative to the maximum value in the frequency slice. Specifying another quantile value between 0 and 1 will select the corresponding quantile (a point of the spectrum-value distribution) for normalization. This has the following effect: all the spatial frequencies with higher amplitudes will be unaffected by the sparseness enforcement, while the other frequencies will be attenuated relative to the smaller value. This parameter is particularly useful in situations when there is a very strong event among weaker ones in the same temporal window (say, a strong water bottom reflection and weak sub-bottom reflections). If you are getting into a situation where the strong reflections are wiped out with a higher Sparseness value and no sparseness is enforced with lower values, this parameter may be extremely helpful.
Sparseness stability has the theoretical meaning of relative signal variance. In practice it should typically be kept small (smaller than the Stability factor). A good starting point is to set it to 0.1 of the Stability factor and roughly tune the sparseness of the solution with the Sparseness parameter. Then, during finer tuning, the data fit may be improved by decreasing the Stability factor and increasing the Sparseness stability. The Sparseness stability should never be zero, to avoid divisions by zero.
Number of iterations and precision affect the performance: the higher the precision, the slower the procedure works. When increasing the Number of data fit iterations parameter, it is typically possible to relax the precision requirements for the sparse solver iterations without affecting the final result.
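For illustration, here is one possible iteratively-reweighted sketch of such a sparse inversion for a single frequency slice (Python/NumPy; the module's actual solver is not documented here, and the weighting scheme below is our assumption):

import numpy as np

def sparse_fit(d, F, stability, sparseness, sp_stability, n_fit_iter=5):
    # d = F m: d - irregular data slice, F - backward Fourier operator,
    # 'stability' ~ Stability factor, 'sp_stability' ~ Sparseness stability,
    # 'sparseness' controls how strongly small coefficients are down-weighted.
    nk = F.shape[1]
    w = np.ones(nk)                              # model weights: all-pass at start
    m = np.zeros(nk, dtype=complex)
    for _ in range(n_fit_iter):                  # data-fit iterations
        Fw = F * w                               # right preconditioning by the weights
        A = Fw.conj().T @ Fw + stability * np.eye(nk)
        m = w * np.linalg.solve(A, Fw.conj().T @ d)
        a = np.abs(m) / (np.abs(m).max() + 1e-30)  # normalize to the maximum (quantile = 1)
        w = (a + sp_stability) ** sparseness     # sparseness weights for the next pass
    return m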
Sparse F-K Filter
Theory
The standard Fourier transform used in F-K filtering does not work correctly on irregular data. Therefore, the approach described below is applied in the module.
The algorithm looks for Fourier coefficients on a regular (!) grid which would exactly reproduce the initial data after the inverse Fourier transform to the irregular grid. The problem in this formulation is an inverse one and is solved by minimizing a residual functional. The first part of the functional requires that the solution fit the initial data as accurately as possible, and the second part requires that the solution meet the "minimum-impulse" condition, which is called sparseness in the literature. This condition means that the solution should describe the data using the minimum number of linear events (large and rare coefficients are preferred).
After the solution on the regular grid has been obtained, the slopes indicated by the user are cut out from it. The module then transforms the data filtered in the spectral domain back to the irregular grid in the time domain.
General Options Tab
Parameters
Dimension panel is responsible for the input data dimension:
Offset headers panel indicates the headers in which the offset information is stored:
• X-offset header means that it is necessary to specify a header where the information about the
offsets in X-direction is stored.
• Y-offset header means that you need to specify a header where the offsets information is stored
in Y-direction. In case of 2D data, this parameter is not active.
Output domain is responsible for the result that will be output to the user:
• Filtered means that if this option is activated, the algorithm will select the events which are to
be removed from the initial data. In this case, if you activate Subtract filtered data, the events
will be subtracted from the initial data.
Frequency range panel defines the time frequencies of a signal which are taken into account by the
algorithm when searching for the solution:
You can determine these parameters when analyzing the spectrum of the input dataset.
Spatial frequency panel specifies the regular grid step for which the solution is searched (see Theory
section).
• Antialias X-offset step specifies the step of grid for which the solution is searched in X-
direction.
• Antialias Y-offset step specifies the step of grid for which the solution is searched in Y-
direction. In case of 2D data, this parameter is not active.
If there is aliasing in the data, the values of the Antialias X-offset step and Antialias Y-offset step parameters should be set smaller than the distance between adjacent traces in the original dataset. This will increase the Nyquist frequency and remove the aliasing from the data.
Transform Grid panel also defines a regular grid for which the values of Fourier coefficients are
searched.
• Start X-offset value and Start Y-offset value parameters specify the initial offset value for
the regular grid. In this regard, determine the minimum offset available in the input data set and
enter this value in the parameter field.
• Number of X-offsets and Number of Y-offsets specify the number of cells for a regular grid in
each direction. It is necessary that the offset values of the last cells exceed the largest offset
values in the input data.
Example:
Suppose there is a 2D dataset in which the minimum offset is 18 m, the maximum offset is 1200 m, and the step between the traces is 6 m. Then Start X-offset value should be set to 18 m, Antialias X-offset step to 6 m, and Number of X-offsets should be chosen so that the offset of the last cell exceeds the maximum offset: 18 m + (N − 1) × 6 m > 1200 m, i.e. N of at least 199 (e.g., 200).
Length of the overlap between adjacent blocks in the inline direction, expressed in the same units as Taper length between blocks in X-direction.
Example:
The default parameters correspond to processing blocks of 40 lines, a taper of 10 lines, and an overlap of 1.5. These parameters mean the following:
As already mentioned, the procedure of calculating the two-dimensional spectrum is not correct for irregular data. Therefore, if the data are irregularly distributed, you must activate the Search for sparse solution parameter.
When filtering with the Sparse F-K filter module, the F-K reconstruction procedure is performed at an intermediate stage; it implies finding a solution on a regular grid that would best fit the irregular data after the reverse Fourier transform. The F-K reconstruction procedure requires careful adjustment of the parameters located in the Parameters for sparse solver tab. By adjusting the parameters, it becomes possible to obtain a balance between the solution's sparseness and its convergence with the data.
ATTENTION! The parameters are closely connected with each other, so you first need to study their effect on a small dataset in order to fully understand the strategy for setting up the inversion parameters.
• Stability factor has the theoretical meaning of relative noise variance. It is a general stabilizing factor which also acts as a cut-off to wipe away dim dipping events from the sparse solution. The Stability factor acts as the balance between the “sparseness” of the solution and the data fit at each data-fit iteration (the final data fit may be perfect even with a high stability factor if a bigger number of data fit iterations is set). The Stability factor should be kept small, but not too small. If it is too big, the solution will be minimum-norm (not sparse), producing zeros away from the input traces, while the fit to the input data will not be good either (the output will be smoothed). If the stability factor is too small, the output may become extremely noisy, and the sparseness of the solution may not be achieved. The Stability factor may also be thought of as the 'white noise level' in deconvolution.
• Sparseness parameter controls how dim events and noise will be attenuated in the reconstructed result. 0 means no sparseness: in the case of a lack of input data, the solution found for the F-Kx-Ky Fourier coefficients will be the minimum-norm solution. In terms of the regularization result, the minimum-norm solution looks like near-zero interpolated traces away from the original trace positions, and nearly input traces at positions close to the input trace positions. But if you see a picture like this, it does not necessarily mean that you should increase the Sparseness parameter. Typically it is better to start with a fixed Sparseness of 0.5–1, make the solution sparse with the Sparseness stability and Stability factor parameters, and then tune the output in terms of keeping the required amount of dim events through the Sparseness parameter.
• Sparseness stability has the theoretical meaning of relative signal variance. In practice it should typically be kept small (smaller than the Stability factor). A good starting point is to set it to 0.1 of the Stability factor and roughly tune the sparseness of the solution with the Sparseness parameter. Then, during finer tuning, the data fit may be improved by decreasing the Stability factor and increasing the Sparseness stability. The Sparseness stability should never be zero, to avoid divisions by zero.
• Number of data fit iterations. Enforcing sparseness may lead to a misfit between the interpolated output and the original traces. Sometimes (on very noisy data) this may be a desirable effect, as the misfit may be entirely noise, and finding a sparse solution that does not keep the original noise is what one may actually want as a result. If this is not the case, and one observes a loss of weak signal events, the data fit is improved by adding more data fit iterations (see Theory for the details of the process).
Antialiasing panel specifies the parameters responsible for dealing with aliasing in the data. To use them, activate the Perform antialiasing parameter:
• Minimum apparent velocity to consider for positive slopes (m/s) and Minimum apparent velocity to consider for negative slopes (m/s) – these parameters restrict the algorithm's search for the solution to a user-defined sector (in the F-K domain).
In order to specify the range of frequencies which are not subject to aliasing, activate Search for dips:
• Minimum frequency for dips search (Hz) is responsible for the minimum frequency in the
spectrum which is not subject to aliasing.
• Maximum frequency for dips search (Hz) is responsible for the maximum frequency in the
spectrum which is not subject to aliasing.
Example
Fan filter parameters panel is responsible for the frequency range which is filtered by F-K filter. The
range is indicated in the format shown in the figure.
• Reject means to mute the specified range of frequencies from the initial data.
• Pass means to leave the frequencies falling within the filter range and mute all other frequencies.
• K-filter (dx) means the distance corresponding to the spatial frequency to be cut out in the X direction. In meters.
• K-filter (dy) means the distance corresponding to the spatial frequency to be cut out in the Y direction. In meters.
Sparse F-K Interpolation
The module is designed for interpolating irregular data to a regular grid specified by the user. Gaps in the data and, in particular, an irregular distribution of offsets in the acquired data seriously affect marine and land data processing. These factors degrade velocity analysis, stacking and migration, and lead to footprints which appear in the final data and can hardly be suppressed after stacking. The main task of the module is the interpolation of irregularly distributed traces of common-offset gathers to a regular grid.
Parameters
Dimension panel is responsible for the input data dimension:
Offset headers panel indicates the headers in which the offset information is stored:
• X-offset header means that it is necessary to specify a header where the information about the
offsets in the input dataset in X-direction is stored.
• Y-offset header means that it is necessary to specify a header where the information about the
offsets in the input dataset in Y-direction is stored. In case of 2D data, this parameter is not
active.
Frequency range panel defines the time frequencies of a signal which are taken into account by the
algorithm when searching for the solution:
You can determine these parameters when analyzing the spectrum of the input dataset
Spatial frequency panel specifies the regular grid step for which the solution is searched (see Theory
section).
• Antialias X-offset step specifies the step of grid for which the solution is searched in X-
direction.
• Antialias Y-offset step specifies the step of grid for which the solution is searched in Y-
direction. In case of 2D data, this parameter is not active.
If there is aliasing in the data, the values of the Antialias X-offset step and Antialias Y-offset step parameters should be set smaller than the distance between adjacent traces in the original dataset. This will increase the Nyquist frequency and remove the aliasing from the data.
Transform Grid panel also defines a regular grid for which the values of Fourier coefficients are
searched.
• Start X-offset value and Start Y-offset value parameters mean that you must specify here the
initial offset value for the regular grid. In this regard, determine the minimum offset available in
the input data set and enter this value in the parameter field.
• Number of X-offsets and Number of Y-offsets means that the number of cells for a regular
grid in each direction is indicated here. It is necessary that the offset values of the last cells
exceed the largest offset values in the input data.
Output Grid step panel is responsible for the step of the output grid on which the solution will be output:
As already mentioned, the procedure of calculating the two-dimensional spectrum is not correct for irregular data. Therefore, if the data are irregular, you must activate the Search for sparse solution parameter.
When Sparse F-K Interpolation runs, the F-K reconstruction procedure is performed at an intermediate stage; it implies finding a solution on a regular grid that would best fit the irregular data after the reverse Fourier transform.
The F-K reconstruction procedure requires careful adjustment of the parameters located in the Parameters for sparse solver panel. By adjusting the parameters, it becomes possible to obtain a balance between the solution's sparseness and its convergence with the data.
ATTENTION! The parameters are closely connected with each other, so you first need to study their effect on a small dataset in order to fully understand the strategy for setting up the inversion parameters.
The main parameter for tuning the sparse reconstruction is Sparseness, which controls how sparse the
solution will tend to be. Typical range for this parameter might be 0.5-2.0.
• Stability factor has the theoretical meaning of relative noise variance. It is a general stabilizing factor which also acts as a cut-off to wipe away dim dipping events from the sparse solution. The Stability factor acts as the balance between the “sparseness” of the solution and the data fit at each data-fit iteration (the final data fit may be perfect even with a high stability factor if a bigger number of data fit iterations is set). The Stability factor should be kept small, but not too small. If it is too big, the solution will be minimum-norm (not sparse), producing zeros away from the input traces, while the fit to the input data will not be good either (the output will be smoothed). If the stability factor is too small, the output may become extremely noisy, and the sparseness of the solution may not be achieved. The Stability factor may also be thought of as the 'white noise level' in deconvolution.
• Sparseness parameter controls how dim events and noise will be attenuated in the reconstructed result. 0 means no sparseness: in the case of a lack of input data, the solution found for the F-Kx-Ky Fourier coefficients will be the minimum-norm solution. In terms of the regularization result, the minimum-norm solution looks like near-zero interpolated traces away from the original trace positions, and nearly input traces at positions close to the input trace positions. But if you see a picture like this, it does not necessarily mean that you should increase the Sparseness parameter. Typically it is better to start with a fixed Sparseness of 0.5–1, make the solution sparse with the Sparseness stability and Stability factor parameters, and then tune the output in terms of keeping the required amount of dim events through the Sparseness parameter.
• Sparseness stability has the theoretical meaning of relative signal variance. In practice it should typically be kept small (smaller than the Stability factor). A good starting point is to set it to 0.1 of the Stability factor and roughly tune the sparseness of the solution with the Sparseness parameter. Then, during finer tuning, the data fit may be improved by decreasing the Stability factor and increasing the Sparseness stability. The Sparseness stability should never be zero, to avoid divisions by zero.
• Number of data fit iterations. Enforcing sparseness may lead to a misfit between the interpolated output and the original traces. Sometimes (on very noisy data) this may be a desirable effect, as the misfit may be entirely noise, and finding a sparse solution that does not keep the original noise is what one may actually want as a result. If this is not the case, and one observes a loss of weak signal events, the data fit is improved by adding more data fit iterations (see Theory for the details of the process).
Antialiasing panel specifies the parameters responsible for dealing with aliasing in the data. To use them, activate the Perform antialiasing parameter:
• Minimum apparent velocity to consider for positive slopes (m/s) and Minimum apparent velocity to consider for negative slopes (m/s) – these parameters restrict the algorithm's search for the solution to a user-defined sector (in the F-K domain).
In order to specify the range of frequencies which are not subject to aliasing, activate Search for dips:
• Minimum frequency for dips search (Hz) is responsible for the minimum frequency in the
spectrum which is not subject to aliasing.
• Maximum frequency for dips search (Hz) is responsible for the maximum frequency in the
spectrum which is not subject to aliasing.
Filters Tab
Fan filter parameters panel is responsible for the frequency range which is filtered by F-K filter. The
range is indicated in the format shown in the figure.
• Reject means to mute the specified range of frequencies from the initial data.
• Pass means to leave the frequencies falling within the filter range and mute all other frequencies.
• K-filter (dx) means the distance corresponding to the spatial frequency to be cut out in the X direction. In meters.
• K-filter (dy) means the distance corresponding to the spatial frequency to be cut out in the Y direction. In meters.
Sparse Radon filter
The module is designed to separate signal and noise in the τ-p domain using a linear/parabolic Radon filter. For this purpose, the High-Resolution Radon modification is used, which makes it possible to obtain a spectral picture with fine resolution. This technique allows seismic events to be separated in the τ-p domain with greater accuracy, works with data having a limited aperture and gaps, and also effectively suppresses multiples.
ATTENTION! The module may require the creation of additional headers in the database for its operation. If the module has failed, check whether the headers reported as required in the Log are available in the database of the current project. By default, these headers are ITRCID, IOFFX and IOFFY.
Theory
The main problems of the standard Radon transform are the low resolution of different seismic events in the τ-p domain, as well as the instability of the inverse transform associated with the spatial sampling frequency and the limited aperture of the input data.
To solve these problems, the module implements an algorithm which uses the condition of a minimum-impulse solution (referred to as "sparseness" in the English-language literature). The algorithm looks for a solution in the τ-p domain which, after the inverse Radon transform, has a minimal difference with the initial data (the algorithm interpolates them in the places of data gaps). Besides, the solution should contain the minimum possible number of events in the τ-p domain. It is this latter condition that makes it possible to obtain the required resolution in the Radon domain.
Module parameters:
Dimension panel is responsible for the input data dimension:
Offset headers panel indicates the headers in which the offset information is stored:
• X-offset header means that it is necessary to specify a header where the information about the
offsets in the input dataset in X-direction is stored.
• Y-offset header means that it is necessary to specify a header where the information about the
offsets in the input dataset in Y-direction is stored. In case of 2D data, this parameter is not
active.
Output domain area is responsible for the result that will be output to the user:
• Filtered means that if this option is activated, the algorithm will select the events which are to
be removed from the initial data. In this case, if you activate Subtract filtered data, the events
will be subtracted from the initial data in the temporal domain.
• Transformed means that the algorithm calculates the solution using a regular grid, the data are
further filtered out in the spectral domain and transformed to an irregular grid.
Frequency range panel defines the time frequencies of a signal which are taken into account by the
algorithm when searching for the solution:
You can determine these parameters when analyzing the spectrum of the input dataset
Spatial frequency panel specifies the regular grid step for which the solution is searched (see Theory
section).
• Antialias X-offset step specifies the step of grid for which the solution is searched in X-
direction.
• Antialias Y-offset step specifies the step of grid for which the solution is searched in Y-
direction. In case of 2D data, this parameter is not active.
If there is aliasing in the data, the values of the Antialias X-offset step and Antialias Y-offset step parameters should be set smaller than the distance between adjacent traces in the original dataset. This will increase the Nyquist frequency and remove the aliasing from the data.
Block element size panel – this parameter is responsible for the configuration of the grid on which the solution will be searched in the Radon domain.
Radon transform options panel is responsible for the Radon transform parameters. A diagram
illustrating the geometric meaning of each parameter is shown below:
• Linear/Parabolic means that it is necessary to select the type of Radon transform which will be
applied to the data. There are 2 types implemented in the module: linear and parabolic Radon
transform.
• Reference offset means an offset for which Minimum moveout at reference offset (ms) and
Maximum moveout at reference offset (ms) parameters are defined. These parameters
determine an area in the seismic gather containing the events to be filtered.
• Minimum moveout at reference offset (ms) and Maximum moveout at reference offset (ms)
mean time boundaries which specify the width of the time interval for the Radon filter. (see the
figure).
• Number of p-values (X direction) – this parameter specifies the number of p values and thus the step between the linear or parabolic events in the area bounded by the Max / Min moveout at reference offset parameters.
Example:
The default parameters correspond to processing blocks of 40 lines, a taper of 10 lines, and an overlap of 1.5. These parameters mean the following:
The minimum-impulse reconstruction procedure requires careful adjustment of the parameters located in the Parameters for sparse solver tab. By adjusting the parameters, it becomes possible to obtain a balance between the solution's sparseness and its convergence with the data.
ATTENTION! The parameters are closely connected with each other, so you first need to study their effect on a small dataset in order to fully understand the strategy for setting up the inversion parameters.
The main parameter for tuning the sparse reconstruction is Sparseness, which controls how sparse the
solution will tend to be. Typical range for this parameter might be 0.5-2.0.
• Stability factor has the theoretical meaning of relative noise variance. It is a general stabilizing factor which also acts as a cut-off to wipe away dim dipping events from the sparse solution. The Stability factor acts as the balance between the “sparseness” of the solution and the data fit at each data-fit iteration (the final data fit may be perfect even with a high stability factor if a bigger number of data fit iterations is set). The Stability factor should be kept small, but not too small. If it is too big, the solution will be minimum-norm (not sparse), producing zeros away from the input traces, while the fit to the input data will not be good either (the output will be smoothed). If the stability factor is too small, the output may become extremely noisy, and the sparseness of the solution may not be achieved. The Stability factor may also be thought of as the 'white noise level' in deconvolution.
• Sparseness parameter controls how dim events and noise will be attenuated in the reconstructed result. 0 means no sparseness: in the case of a lack of input data, the solution found for the Fourier coefficients will be the minimum-norm solution. In terms of the reconstruction result, the minimum-norm solution looks like near-zero interpolated traces away from the original trace positions, and nearly input traces at positions close to the input trace positions. But if you see a picture like this, it does not necessarily mean that you should increase the Sparseness parameter. Typically it is better to start with a fixed Sparseness of 0.5–1, make the solution sparse with the Sparseness stability and Stability factor parameters, and then tune the output in terms of keeping the required amount of dim events through the Sparseness parameter.
• Sparseness stability has the theoretical meaning of relative signal variance. In practice it should typically be kept small (smaller than the Stability factor). A good starting point is to set it to 0.1 of the Stability factor and roughly tune the sparseness of the solution with the Sparseness parameter. Then, during finer tuning, the data fit may be improved by decreasing the Stability factor and increasing the Sparseness stability. The Sparseness stability should never be zero, to avoid divisions by zero.
• Number of data fit iterations. Enforcing sparseness may lead to a misfit between the interpolated output and the original traces. Sometimes (on very noisy data) this may be a desirable effect, as the misfit may be entirely noise, and finding a sparse solution that does not keep the original noise is what one may actually want as a result. If this is not the case, and one observes a loss of weak signal events, the data fit is improved by adding more data fit iterations (see Theory for the details of the process).
Muting Tab:
The tab parameters allow you to obtain an image in the Radon domain and manually select the area to be filtered.
Type:
• No mute means that the module operates without manual muting of the events in the Radon
domain.
• Top means to mute the events above the picking.
• Bottom means to mute the events below the picking.
Algorithm:
1. Activate the Transformed mode in the General Options tab.
2. Set the type of mute (Top/Bottom)
3. Run the module.
4. A display appears in the Radon domain in the Screen Display window, with slowness on the horizontal axis and time on the vertical axis.
5. Select the area for the mute by picking and save the resulting pick in the database.
6. Close the Screen Display window and load the picking in the Muting tab.
7. Run the module again
Time Variant F-K Filter
The module performs two-dimensional filtering in windows. The trace is divided into time intervals, for each of which a different pass or reject area is specified.
For theoretical information and how to set the area (polygon or fan) of two-dimensional filtering, see the
relevant sections of the F-K Filter module.
ATTENTION! For correct operation of the module, it is necessary that the distances between the traces
along the X axis be approximately the same.
Module parameters
Window for operating with the Time Variant F-K filter module
Horizons (ms) – the list of boundaries separating the windows to be filtered. The window boundaries are added and removed with the Append and Remove buttons, respectively, while the Reset button allows getting back to the default settings.
• Picking from the header - press the H button to select the header
It is possible to see an example of setting the windows in the figure below. In this case the two pickings
selected in the Horizons field divide the section into 3 parts where the 2D filtering will be performed:
1) above the red picking
2) between pickings
3) below the yellow picking
An example of setting windows in the seismic section
ATTENTION!
1. The horizons must not intersect, otherwise the program behavior will be unpredictable.
2. It is recommended to use the Horizon Manipulation module when preparing the pickings of
horizons, which makes it possible to correctly interpolate in space the horizons picked at separate
pickets.
The smoothing of the results between two windows takes place in the interval set using the Taper between time windows, [ms] parameter. The interval is counted up and down from the boundary (horizon). Within the interval, smoothing is performed by overlapping the windows with weighting coefficients, which makes the transition between the data sets visually less noticeable. The amplitudes of each window are multiplied by a decreasing linear function and then summed within the given interval (see the figure below).
Smoothing of the filtering results within a given interval. The amplitudes of the first window traces are
multiplied by the decreasing linear function within the "horizon-lower boundary of the interval" interval and
summed with the amplitudes of the second window traces. This process takes place in the same way within the
"upper boundary of the interval - horizon" limits.
Parameters of the "Parameters for windows", "Distance between traces", "Use ensembles", "Taper
window width", "Number of threads" sections are similar to those presented in the F-K filter
module (see Module parameters for details).
Structural Smoothing
This module estimates the dominant dip in the seismic data and smooths the data along this dip. It can
estimate the dips and conduct smoothing in one pass, or just estimate and output the dips. The module
can be used to solve various tasks such as noise removal or diffraction separation. There is an option
for smoothing non-seismic data (such as velocities) along the dips estimated from seismic.
NOTE: It is recommended to perform structural smoothing on full ensembles (e.g., full common-
offset gathers, full stacks, etc.) to minimize edge effects. To do this, one needs to use proper sorting
(see below) and set Process by ensembles to Yes (1). In Framed mode, Honor ensemble boundaries
option is recommended.
Dip source – specifies the source of dips to be used for smoothing. There are three options:
• Calculate from input – calculates the dips from the input dataset.
• Read from external dataset – uses the dips from external dataset for smoothing. This is useful
when one needs to preprocess the computed dips before performing the smoothing. To do this,
one needs to compute the dips with Structural Smoothing and output them into a separate
dataset by selecting Output dips – Yes (1) and Output smoothed data – No (0). Then, any
preprocessing (such as clipping and smoothing) can be applied to the dips dataset in RadExPro
or by external tools. Finally, the preprocessed dips dataset can be set in Specify external dips
source. It is assumed that the sorting of the input dataset and the dips dataset is the same.
NOTE: In this case the traces in the dataset with dips need to have TRC_TYPE = 374. The
module writes 374 to the TRC_TYPE header when outputting dips – so, when dips are being
preprocessed, this header value needs to be preserved (or re-set before applying Structural
Smoothing).
• Calculate using external seismic dataset – uses an external seismic dataset to estimate the dips.
This allows for smoothing non-seismic data guided by seismic dips. For example, if one needs
to smooth the migration velocity along the dip, one may select this option, input the velocity
into the module from the processing flow, specify the seismic dataset for dip computation in
Specify external dips source and run the flow. It is assumed that the sorting of the input dataset
and the dataset for dip computation is the same.
Dip estimation trace window width, [traces] – the size of the window used for dip estimation along
trace axis. Larger windows result in smoother dips.
Dip estimation time window width, [ms] – the size of the window used for dip estimation along time
axis. Larger windows result in smoother dips.
Specify external dips source – the dataset which contains the dips if Dip source is set to Read from
external dataset or the dataset which is used to compute the dips if Dip source is set to Calculate
using external seismic dataset (not active if Dip source is set to Calculate from input).
Output filtered data – a switch for filtered data output. If set to Yes (1), the module outputs filtered
data. If set to No (0), filtered data is not computed.
Output type – a switch for the type of filtered data. If set to Smoothed data, the module outputs the
data smoothed along the dips. If set to Difference, the module outputs the difference between the
original data and the data smoothed along the dips. The Difference mode is useful for tasks such as
diffraction imaging, when the smoothed data is represented by reflections estimated by smoothing
along the dip, and the difference between the original data and the reflections outputs diffractions.
Structural smoothing trace window width, [traces] – the width of the window used for structural
smoothing. Larger windows result in smoother processed data.
Output dips – a switch for the dips’ output. If set to Yes (1), the module outputs the dips. If set to
No (0), the dips are not output. If both Output dips and Output smoothed data are set to Yes (1), the
output dataset contains the interleaved smoothed traces and dip traces. Dip traces are marked with
TRC_TYPE = 374 for further discrimination from the data. The dips are measured in samples per
trace.
Interpolation algorithm – a trace amplitude interpolation algorithm used in the estimation of the
filtered data. Sinc provides higher accuracy and Linear is more computationally efficient. For most
cases, Sinc mode is recommended.
Process by ensembles – a switch for processing ensembles separately. Setting it to Yes (1) prevents
the dip estimation and smoothing windows from crossing the ensemble boundaries, thus removing the
edge effects.
Number of threads – sets the number of threads for parallelization (only active when Process by
ensembles set to Yes (1)). Note that the parallel computation occurs by ensemble, which means that it
works only when the frame contains several ensembles or when the framed mode is turned off and the
input data stream contains several ensembles. It also means that the number of threads does not need to
be larger than the number of ensembles in one frame or than the number of available CPU cores. For
optimal speedup, the number of ensembles in one frame needs to be a multiple of the number of
threads.
Theory
This module conducts the estimation of dominant dips in the seismic data and consequent smoothing
along these dips using the plane-wave destruction filters. The theory of these filters is presented by
Claerbout (1992) and Fomel (2002).
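As an illustration only, below is a minimal numpy sketch of smoothing along a precomputed dip field (dips in samples per trace, as the module outputs them). It is not the plane-wave destruction implementation referenced above; the names and the linear interpolation scheme are assumptions:

```python
import numpy as np

def smooth_along_dips(data, dips, half_width):
    # data: 2D array (traces x samples); dips: per-trace dip, samples/trace.
    n_trc, n_smp = data.shape
    out = np.zeros(data.shape)
    t0 = np.arange(n_smp, dtype=float)
    for i in range(n_trc):
        acc = np.zeros(n_smp)
        for k in range(-half_width, half_width + 1):
            j = min(max(i + k, 0), n_trc - 1)            # clip at edges
            t = np.clip(t0 + k * dips[i], 0, n_smp - 1)  # follow the local dip
            lo = np.floor(t).astype(int)
            hi = np.minimum(lo + 1, n_smp - 1)
            w = t - lo
            acc += (1 - w) * data[j, lo] + w * data[j, hi]  # linear interp.
        out[i] = acc / (2 * half_width + 1)
    return out
```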
References
Claerbout, J. F. (1992). Earth soundings analysis: Processing versus inversion (Vol. 6). London: Blackwell Scientific Publications.
Fomel, S. (2002). Applications of plane-wave destruction filters. Geophysics, 67(6), 1946-1960.
Deblending
This module removes the noise caused by blended acquisition of seismic data using an iterative inversion-based technique.
• Blended acquisition usually results in continuous recordings of seismic data. The Deblending
module, however, needs the input data to be sliced according to its source excitation time (in the
literature, such sliced gathers are called ‘pseudodeblended’ data). The slicing can be performed
using the Data Slicer module in RadExPro or by a similar tool in an external software.
• Each seismic gather input to this module needs to be recorded by one specific receiver during
blended acquisition. Thus, for marine acquisition with towed streamers, the suggested sorting is
common-channel (CHAN:SOU_SLOC for 2D acquisition, number of ensemble fields in Trace
Input = 1, and S_LINE:CHAN:SOU_SLOC for 3D acquisition, number of ensemble fields in
Trace Input = 2). For onshore seismic or marine seismic with sea bottom receivers, 2D or 3D
common-receiver gathers should be used (R_LINE:REC_SLOC:S_LINE:SOU_SLOC for
3D). It is assumed that in such sorting the signal appears as ‘smooth’, while the blending noise
looks like random bursts (this is usually true when field acquisition involves ‘dithered’ shots,
i.e., when the time intervals between shots are unstable).
• The input data needs to have the shot time headers filled in: YEAR, DAY, HOUR, MINUTE, SECOND, MS. The last header, MS, contains the milliseconds of the shot time (the type of this header is real), possibly down to microsecond accuracy; in the module parameters, a different milliseconds header can be specified instead of MS in Milliseconds header (Real8). The headers need to contain the very same time that was used for slicing, and the shot time should be as accurate as possible (see the sketch after this list for how the headers combine into an absolute time). Alternatively, if Use all time headers is set to No, the module only reads the time in milliseconds from Milliseconds header (Real8), counted from any epoch which started before the start of the survey.
• For the flip-flop and general 3D cases, more complex sorting needs to be performed to provide
the smoothness of signal and randomness of blending noise. If flip-flop sources are used with
towed streamers, additional separation of the input data into sub-ensembles using the GUN_ID
header needs to be conducted: CHAN:GUN_ID:SOU_SLOC for 2D (number of ensemble
fields is 1) and S_LINE:CHAN:GUN_ID:SOU_SLOC (number of ensemble fields is 2) for 3D
acquisition. If such a sorting is used, a specific ‘header for sub-ensemble separation’ module
parameter needs to be specified. For flip-flop marine acquisition, one needs to set it to GUN_ID.
For 3D common-receiver gathers in R_LINE:REC_SLOC:S_LINE:SOU_SLOC sorting with
number of ensemble fields = 2, one needs to provide S_LINE in this parameter field.
• Each ensemble needs to have a trace corresponding to each shot fired during the acquisition of
this ensemble. The whole ensemble is processed at once, so no preprocessing/muting/trace
removal can be conducted before deblending. If there are two sources acquiring a sail line in flip-
flop mode – both flip and flop traces need to be in one ensemble (using the sorting specified
above). If two sources are simultaneously acquiring a 3D common receiver gather – each shot of
these two sources needs to have a dedicated trace in the input ensemble. If any source excitation
is present on the gather as blending noise, it needs to have a dedicated trace where the same
source excitation acts as signal. This can further be explained using an example of an ocean
bottom survey. If there is only one source acquiring common-receiver gathers in a blended
manner for a 3D survey – we can input each shot line as a separate ensemble to save
computational resources, as different shot lines do not overlap with each other in time (i.e., we
can use sorting R_LINE:REC_SLOC:S_LINE:SOU_SLOC with number of ensemble fields
= 3). If there are two independent sources acquiring different shot lines for one common-receiver
gather at the same time, the whole common-receiver gather needs to be provided in one ensemble
(i.e., sorting R_LINE:REC_SLOC:S_LINE:SOU_SLOC with number of ensemble fields =
2 and S_LINE provided as the ‘header for sub-ensemble separation’).
• Trace length obtained from the data slicing influences the processing results directly, as the
algorithm is only able to remove the noise in the parts of the data where the provided gathers
overlap. In some cases, increasing the trace length at the slicing stage may improve the
deblending results.
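As referenced in the list above, here is a minimal sketch of how the shot time headers could be combined into an absolute time in milliseconds (assuming DAY is the day of the year; the function name is hypothetical):

```python
from datetime import datetime, timedelta

def shot_time_ms(year, day, hour, minute, second, ms):
    # Combine the YEAR/DAY/HOUR/MINUTE/SECOND/MS headers into an absolute
    # time in milliseconds since the Unix epoch (DAY assumed = day of year;
    # ms may be fractional, down to microsecond accuracy).
    t = datetime(year, 1, 1) + timedelta(days=day - 1, hours=hour,
                                         minutes=minute, seconds=second)
    return (t - datetime(1970, 1, 1)).total_seconds() * 1000.0 + ms
```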
General instructions
This module contains a certain number of coherence-pass filters (F-K Filter, F-X Predictive Filter,
Median Filter/TFD), which are applied to the data in an iterative manner. It is advised to test these
filters (all of them except the median are present as separate modules in RadExPro) and start
deblending with a parameter set for these filters which suppresses the blending noise to some extent but does not touch the signal. It is fine if the application of the 2D filters to the high-amplitude blending noise generates some artefacts; the iterative scheme attenuates them. For a well-sampled dataset, using a median filter/TFD together with an F-K filter may be enough. For F-X Predictive Filtering, we found that rather small time windows of 100-200 ms are useful for signal preservation.
Parameters
Threshold value – a value used for data thresholding during inversion. This value is internally scaled relative to the maximum value of the data, so the user sets the threshold in the range (0, 1). At each iteration, the threshold is equal to h^i, where h is the value supplied by the user and i is the iteration number.
Thresholding domain – the domain where the thresholding takes place. Two options are available,
Time and F-K. Generally, for well-sampled non-aliased data, F-K domain inversion provides better
noise removal results. If the spatial sampling of the data is poor, time-domain inversion can still
provide a meaningful result.
Use blending mask – a switch which constrains the deblending process by a blending mask. If this
switch is turned on, the algorithm uses the shot times to identify the samples in the data where there is
no shot overlap. These samples are then left untouched during processing, similar to the work of Zhou
(2017). This option allows one to limit the coherent filters’ application to the blended parts of the data,
which preserves the signal in the areas where there is no shot overlap. This works best for towed-streamer surveys, where it allows one to preserve a significant part of the seismic gather after the blending noise from the previous shot and before the blending noise from the next shot.
Combined median-F-K filter (only active if thresholding domain is Time) – a switch for nonlinear
combination of median and F-K filtering. If this option is turned on, then, instead of sequential
application of median and F-K filters, these filters are applied to the input data separately and their
results are combined in a nonlinear manner as suggested by Mahdad (2012).
Number of iterations – number of inversion iterations. It is recommended to set Threshold value and Number of iterations together so that the threshold at the last iteration, h^n (where n is the number of iterations), is small enough. We observed that h^n ~ 10^-5 (e.g., provided by the parameters h = 0.6 and n = 20) is suitable for F-K-domain deblending. For the time domain, a lower value in the range 10^-7 to 10^-6 is suggested. To increase the processing speed, one can decrease the number of iterations while increasing the threshold proportionally so that h^n stays approximately the same, while also controlling the quality of the processing result (see the sketch below).
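A quick way to balance these two parameters (a sketch; the helper name is hypothetical):

```python
def threshold_for(n_iter, final_threshold):
    # Pick the Threshold value h so that the last-iteration threshold
    # h**n_iter equals the desired final value.
    return final_threshold ** (1.0 / n_iter)

h = threshold_for(20, 1e-5)   # ~0.56, close to the h = 0.6 quoted above
print(h, h ** 20)             # 0.5623..., 1e-05
```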
Shot blending factor – equal to the average number of shot excitations in one trace length. This is
often present in the survey documentation or can be estimated by looking at the sliced shot gathers and
counting the number of source activations on one gather. Using too low a shot blending factor can lead to instabilities in the inversion process.
Use all time headers – a switch which sets the input time format.
• If set to Yes, headers YEAR, DAY, HOUR, MINUTE, SECOND, MS are used for shot time
input. If a header different from MS is set in Milliseconds header (Real8) parameter field – this
different header will be used.
• If set to No, the module only reads the time header set by the Milliseconds header (Real8)
parameter, which is expected to contain Unix time in milliseconds, or another time standard also
converted to milliseconds (UTC, GPS, etc.).
Milliseconds header (Real8) – header containing the milliseconds of the shot time (note that this
needs to be a ‘Real’ type header).
Header for separation within ensemble – header used to separate the input ensemble into sub-
ensembles. This can be applied to make sure the flip and flop common-channel gathers undergo the
coherent filtering and thresholding operations separately (most commonly this is set to GUN_ID,
however different headers can be used for non-towed streamer surveys).
Number of threads – sets the number of threads for parallelization. Note that the parallel computation
occurs by ensemble, which means that it works only when the frame contains a few ensembles. It also
means that the number of threads does not need to be larger than the number of ensembles in one
frame. For optimal speedup, the number of ensembles in one frame needs to be a multiple of the
number of threads. Note that the available RAM limits the number of ensembles one can have in a
frame.
• Use F-K Filtering – a switch to turn on or off the F-K Filtering inside the deblending loop.
• Minimum velocity to preserve, [m/s] – the minimum velocity which the F-K filter keeps in the
data. This module contains simplified parameter setting for the F-K filter. The F-K filter only
uses symmetric fan filters which suppress all the events with the velocity lower than Vmin, both
positive and negative wavenumbers, for all frequencies from 0 to Nyquist. It is advised to set this
velocity so that the signal on the input gathers is not damaged. The default value of 1300 m/s is
suggested as a good starting point for marine surveys, as it keeps the water layer velocity of
~1500 m/s and all the faster events. For high-frequency seismic sources, it can still be useful to
decrease the value of this parameter from the default. Sometimes the default 1300 m/s value can suppress the variation of the signal on common-offset gathers which occurs due to the variation of the relative source and receiver positions during towing. We were able to obtain good-quality results with values as low as 500 m/s.
• Taper window width, [%] – width of the taper zone for the F-K filter.
• Distance between traces, [m] – physical distance between the input data traces, which is used
by the F-K filter.
• Filter type – type of median filter to be used, Simple for simple median filter with a threshold
and TFD for windowed frequency-domain median filter implemented in the TFD Noise
Attenuation module. There is also an option to turn the median filter Off.
• Turn off the median filter for last N iterations – the number of iterations at the end of the
inversion where the median filter is turned off. At every iteration, the algorithm used in the
module adds the difference between the current solution and the original data to the current
solution. So, if the median filter used is too harsh on the signal, one can turn it off for several
iterations at the end so that the signal removed by the median filter is added back, when the
deblending noise is already removed (sometimes this comes at a cost of a part of the noise coming
back, however). It is suggested to try using this trick for F-K domain inversion. The number of
iterations needed to bring the suppressed signal back depends on the blending factor. For the
blending factor of 2, turning off the median filter for 5 iterations brings back more than 95% of
the suppressed signal. For higher blending factors, more iterations are needed.
• If Simple is selected, the Window width, [traces] is the filter length along the trace axis, and Threshold is the median filter threshold (the median filter is applied only to those samples of the data for which |median value| / |original data value| > Threshold).
• If TFD is selected, the module contains the same parameters present in the TFD Noise
Attenuation module, refer to the corresponding section of the manual.
• F-X Predictive Filtering on/off – a switch to turn on or off the F-X Predictive Filtering inside
the deblending loop.
• Turn off the F-X Filter for last N iterations – the number of iterations at the end of the
inversion where the F-X Predictive Filtering is turned off (same as for the median filter above).
• The remaining parameters are explained in the section of the manual on F-X Predictive
Filtering.
Theory
Deblending removes blending noise – the type of noise which appears when the shot interval in the survey is smaller than the maximum recording time (e.g., an average 2 s interval between shots with a 4 s desired recording time). This noise also appears when several shot vessels are acquiring the data at the same time. The recording in such cases is continuous, and the data are then sliced into CSGs (common shot gathers) using timestamps.
The easiest way to deal with the blending noise is to apply any noise burst removal algorithm in a
domain where this noise appears random (CMP, CRG, common offset). Still, there are more accurate
methods for this. These often solve an inverse problem to estimate the denoised signal.
Sparse inversion is the method used in the Deblending module in RadExPro. Various modifications
of the same idea can be found in the literature (Abma et al., 2015; Bahia et al., 2021; Chen, 2015;
Mahdad, 2012; Velasques, 2020). All these algorithms involve supplying an input dataset where the
signal is coherent, and the blending noise is not. Then, the algorithm conducts iterative estimation of
the denoised dataset by applying a set of coherence-pass filters to the data followed by thresholding in
a domain where the data is sparse. At each iteration, the threshold decreases. The idea is that the
blending noise is suppressed at every iteration, and only the signal passes the threshold. At the
following iteration, the signal picked up by the threshold is used to model and remove the blending
noise caused by this signal. The algorithm continues until the threshold is small enough and the
blending noise is completely removed.
In our modification, the user can apply median filtering, F-X predictive filtering and F-K filtering (in
this order) as coherence-pass filters. The thresholding can take place in F-K domain or time domain.
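A heavily simplified sketch of this iterative scheme is given below (time-domain thresholding with a single coherence-pass filter; `blend_noise_of` stands in for a blending-noise modelling step built from the shot times and is hypothetical, as is the rest of the naming):

```python
import numpy as np

def deblend(data, blend_noise_of, coherence_filter, h, n_iter):
    # data: pseudodeblended gather (2D array, traces x samples)
    # blend_noise_of(m): models the blending noise that the current signal
    #                    estimate m would produce on this gather
    # coherence_filter:  signal-passing filter, e.g. an F-K fan or median
    m = np.zeros_like(data)
    for i in range(1, n_iter + 1):
        # subtract the blending noise predicted from the current estimate
        r = data - blend_noise_of(m)
        u = coherence_filter(r)
        thr = (h ** i) * np.abs(u).max()       # threshold shrinks each iteration
        m = np.where(np.abs(u) > thr, u, 0.0)  # time-domain thresholding
    return m
```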
An example of the algorithm’s application to synthetic common-channel data is shown in the Figure
below.
The following example shows the application of the module to one ensemble of a synthetic flip-flop
dataset in CHAN:GUN_ID:SOU_SLOC sorting with a large blending ratio of 4. Header for
separation within ensemble is set to GUN_ID here.
Next, we show an example of the algorithm’s application to a field dataset. Three common-shot
gathers before and after deblending are displayed below, as well as the difference between them. Note
that the deblending was still carried out in the common-channel domain. One can observe that the
removal of blending noise reveals low-amplitude reflected waves (highlighted by orange dashed lines).
Also, note that each gather has blending noise from the next and the previous shots removed (this can
be observed on the difference). Note that a part of blending noise from the previous shot still remains
in the data (blue rectangle). This is not an issue for this dataset, as this noise appears above the seafloor
and can be muted. In order to remove this noise during deblending, longer traces need to be supplied to
the algorithm (i.e., the user needs to redo the data slicing with a longer trace length).
Finally, we provide one more example of the algorithm’s application to a common-channel gather in a
field marine dataset. Note how the algorithm preserves the continuity of the horizons originally
completely covered by blending noise (highlighted with orange arrows).
Before deblending (left) and after deblending (right)
References
Abma, R., Howe, D., Foster, M., Ahmed, I., Tanis, M., Zhang, Q., ... & Alexander, G. (2015).
Independent simultaneous source acquisition and processing. Geophysics, 80(6), WD37-WD44.
Bahia, B., Lin, R., & Sacchi, M. (2021). Regularization by denoising for simultaneous source
separation. Geophysics, 86(6), P69-P83.
Chen, Y. (2015). Iterative deblending with multiple constraints based on shaping regularization. IEEE
Geoscience and Remote Sensing Letters, 12(11), 2247-2251.
Velasques, M. M. (2020). Seismic deblending: using iterative and compressive sensing methods to
quantify blending noise impact on 4D projects. Colorado School of Mines.
Zhou, Y. (2017). A POCS method for iterative deblending constrained by a blending mask. Journal of
Applied Geophysics, 138, 245-254.
Trace Editing
This module makes it possible to exclude invalid (dead) traces and/or record intervals (muting) from the initial record.
1. Initial data very often contain traces with extremely high amplitude values. Such amplitudes are caused by technical noise of different nature and have nothing in common with reflections from subsurface objects. Traces identified by the user are muted in order to eliminate their influence on processing results and interpretation quality; however, to preserve the real horizontal scale, they are not excluded from the data.
2. As a rule, intensive noise that does not interfere with the desired trace fragment is registered at the start and end times of the seismic record. In almost all types of processing this noise is subject to various transforms and significantly affects the result. Muting one or more rectangular areas of the processed panel helps to neutralize this effect.
Parameters
Muting – specifies the type of editing:
• Top muting - allows muting of trace fragments from zero time to time defined by the user.
• Bottom muting - allows muting of trace fragments from time defined by the user up to
maximum time.
• Muting in window – mutes the trace segments within the specified symmetrical window
relative to the user-defined times.
• Surgical Muting – mutes the trace segment between two pre-defined horizons.
When Top/Bottom muting is selected, the Taper window length field becomes active.
Taper window length, [ms] – length of the time window (in ms) within which the trace amplitudes
will be tapered after muting is applied.
Horizon – specifies the horizon relative to which muting will be performed. The horizon can be set as a pick from the database, from a trace header, or manually in the Specify text field:
An example:
CDP
0-50:500,70:300
In this example, CDP is the header field used for selection; 0-50:500 assigns the time 500 ms to field values (here, CDP numbers) 0 through 50, and 70:300 assigns 300 ms to CDP 70.
The second Horizon tab becomes active when the Surgical Muting option is selected in Muting. The parameters for horizon selection are similar.
The Editing section allows excluding entire traces selected by the user via the Horizon parameter.
Trace Length
This module is applied to change the number of samples in the traces of the flow. If the new number of samples is smaller than the old one, the traces are cut off; otherwise, they are padded with zeroes to comply with the new length. This procedure can be useful in the following cases:
1. When you need to reduce the data from different input sequences to one trace length
2. When you need to add zero samples to traces in order to provide space for a static shift
3. When you need to cut an unneeded trace fragment off in order to increase processing speed
When this module is activated, a window appears where, in the New trace length field, you specify the new trace length (expressed in ms). A sketch of the pad/truncate logic is given below.
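A minimal sketch of this pad/truncate logic (converting the new length from ms to samples is omitted):

```python
import numpy as np

def set_trace_length(trace, new_nsamp):
    # Truncate to, or zero-pad up to, the new number of samples.
    if new_nsamp <= len(trace):
        return trace[:new_nsamp]
    return np.pad(trace, (0, new_nsamp - len(trace)))
```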
Trace Math
The Trace Math module is applied to conduct arithmetic operations between adjacent traces or between traces and a scalar.
The module can operate in two modes: trace with trace, and trace with scalar. In the first mode, every trace at the module output is a combination of two traces at the input. If the number of traces is odd, the last trace remains unchanged.
Parameters
Mode - selection of module execution method:
Trace/Scalar method
In the Trace/Scalar mode, select the Constant option in the Setting mode parameter and specify in the Value field the scalar to be used in the selected operation. You can also use, for every trace, a value written in one of its header fields: select the Variable option and enter the header field name in the Header field.
The following operations are possible between the trace and the scalar:
The Divide into Scalar option requires a threshold value in the Divide Threshold field. Where the absolute value of a sample is smaller than this threshold, the division is not performed.
Trace/Trace method
The Divide traces option requires a threshold value in the Divide Threshold field. Where the absolute sample values are smaller than this threshold, the division is not performed (a sketch of this guarded division is given below).
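A minimal sketch of such a guarded division (the behaviour for skipped samples — kept unchanged — is an assumption for illustration):

```python
import numpy as np

def divide_traces(numer, denom, divide_threshold):
    # Samples where |denominator| is below the Divide Threshold are left
    # unchanged (assumed behaviour for this sketch).
    out = numer.copy()
    mask = np.abs(denom) >= divide_threshold
    out[mask] = numer[mask] / denom[mask]
    return out
```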
Find/replace NaN
This module is used to find corrupted floating-point values, i.e. Not-a-Numbers (NaNs), in the data. Such values can appear as a result of errors in reading or processing the data (for example, because of reading a floating-point number from a wrong byte, or as a result of dividing a number by zero). In a processing flow, this module reports the position of every NaN found in the data (trace number and sample number) to the log file of the flow. To replace NaNs with zeros, check the Replace not finite numbers with zeros box and click OK. A sketch of this logic is given below.
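A minimal sketch of the module's logic (the report format here is illustrative, not the module's actual log wording):

```python
import numpy as np

def report_and_fix_nans(traces):
    # traces: 2D array (n_traces, n_samples)
    bad = ~np.isfinite(traces)
    for itrc, ismp in zip(*np.nonzero(bad)):
        print(f"Non-finite value at trace {itrc + 1}, sample {ismp + 1}")
    traces[bad] = 0.0  # 'Replace not finite numbers with zeros'
    return traces
```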
Ensemble Sliding Summation
The module is intended for averaging noisy data over several ensembles. This need often occurs during marine acquisition when it is necessary to obtain a signal with a stable frequency response, for example, for one FFID.
Input data: a dataset containing several trace ensembles. Most frequently the data is sorted for the module input as follows: FFID:CHAN or FFID:OFFSET.
ATTENTION! In general, the number of traces in the ensembles can vary, but it is recommended to input datasets whose ensembles contain the same number of traces.
A schematic representation of a dataset consisting of 9 ensembles. The figure shows the windows in which the traces are averaged and the step between the windows.
The algorithm averages the traces inside the window step by step: traces with the same sequence number within the ensembles are summed up, and the result is divided by their quantity.
Algorithm:
1) The number of traces inside the window is counted for each ensemble.
2) Each ensemble trace is given a sequence number from 1 to N, where N is the number of traces in the largest ensemble inside the window.
3) The algorithm verifies the availability of traces with number X (1<X≤N). Traces having the same sequence number are added, and the result is divided by their number.
4) The result of such averaging is an ensemble consisting of averaged traces (see point 3). Their quantity is N.
A sketch of this scheme is given below.
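A minimal numpy sketch of the sliding summation (ensembles as a list of 2D arrays; the names are hypothetical):

```python
import numpy as np

def sliding_ensemble_stack(ensembles, window, step):
    # ensembles: list of 2D arrays (n_traces_i, n_samples); traces with the
    # same sequence number inside each window are averaged.
    out = []
    for start in range(0, len(ensembles) - window + 1, step):
        group = ensembles[start:start + window]
        n_max = max(e.shape[0] for e in group)     # N, largest ensemble
        stacked = np.zeros((n_max, group[0].shape[1]))
        for x in range(n_max):
            rows = [e[x] for e in group if e.shape[0] > x]
            stacked[x] = np.mean(rows, axis=0)     # sum / number of traces
        out.append(stacked)
    return out
```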
Parameters
• Number of ensembles – the width of the window inside of which the traces are averaged.
• Step – the step between the windows.
Deconvolution
This module is obsolete. Use the Custom Impulse Trace Transforms module for deconvolution
calculations.
This module is meant to perform deterministic deconvolution. In fact, it can perform not only deconvolution but also any other spectral operations with traces and a specified signal.
Theory
Deconvolution is an inverse filtering of a signal with a defined source pulse. The registered reflected signal (trace) is interpreted within a linear model defined by the convolution integral. Within this model, the frequency characteristic of a trace is the product of the characteristic of the receiving-recording channel and the frequency characteristic of the medium. The latter includes distortions induced by the medium, which are actually interpretation errors of the observed data (for example, multiple reflections, ghost waves formed at the Earth's surface, and similar waves). The essence of deterministic deconvolution is that the frequency characteristic of the reflected signal (trace) is divided by the frequency characteristic of the initial impulse (i.e. the impulse of the receiving-recording channel). Ideally, this yields the frequency characteristic of the medium which, when converted to the time domain (if the above-mentioned distortions caused by the medium are not taken into account), returns a reflection-coefficient section at the output. In practice this is impossible for a number of reasons; however, deterministic deconvolution allows significant narrowing of the signal, increasing the resolution of the data. A minimal sketch of the spectral division is given below.
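A minimal sketch of the spectral division (the white-noise regularization shown here is an assumed stabilization, not necessarily the module's internal scheme):

```python
import numpy as np

def deterministic_decon(trace, signature, white_noise=0.01):
    # Divide the trace spectrum by the signature spectrum; equivalent to
    # amplitude division plus phase subtraction.
    n = len(trace)
    T = np.fft.rfft(trace, n)
    S = np.fft.rfft(signature, n)
    eps = white_noise * np.abs(S).max()            # stabilizes the division
    out = T * np.conj(S) / (np.abs(S) ** 2 + eps ** 2)
    return np.fft.irfft(out, n)
```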
Before applying deterministic deconvolution, determine the source impulse for inverse filtering (the signature). This impulse should be saved in a file containing a sequence of real numbers in R4 (IEEE) format. To obtain the signature you can use the procedure described below.
1. Select a fragment of profile with traces you would like to use for signature determination
2. Pick the reflected signal which will be used for deconvolution via the picking function of the
Screen Display module. To speed up the operation you can use the automatic pick mode. The
pick must be saved either in the database (the Tools|Pick|Save module command) or in the text
file (the Tools|Pick|Export... module command).
3. Reduce the impulses to common time via the Apply Statics module with static corrections
obtained at a previous stage.
4. Sum corrected traces via the Ensemble Stack module.
5. Save the obtained trace into file via the Data Output module having the following parameters:
▪ Format: User-defined
▪ File passport: 0
▪ Trace Passport: 0
▪ Trace points: the value is selected in accordance with the impulse length (including the impulse delay); you can use the number of samples of the input data flow.
▪ Data format: R4
▪ From x, to x, From t, to t – to be set in accordance with the values containing the impulse.
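For illustration, a signature trace can be written as such a plain R4 (IEEE float32) sequence with numpy (the file name and placeholder data are hypothetical):

```python
import numpy as np

# 'stacked_trace' stands in for the trace obtained at step 4 above.
stacked_trace = np.zeros(512, dtype=np.float32)   # placeholder data
stacked_trace.tofile("signature.r4")  # plain sequence of IEEE 4-byte reals,
                                      # no file or trace headers (passports = 0)
```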
Parameters
• File – a file that contains the trace with signature. To define the file name, click the Browse...
button or enter the name and the path manually;
• t1, t2 – start and end time of signature in trace (it is considered that the first sample has zero
time);
• dt - sampling step of the trace with signature;
Deconvolution parameters:
• t1, t2 — start and end time of the vertical window to which the deconvolution will be applied (usually set to include the whole trace time range).
The Amplitude spectra group of parameters defines what operation is to be applied to the amplitude spectrum of every trace and the amplitude spectrum of the impulse:
The Phase spectra group of parameters defines the type of operation applied to the phase spectrum of every trace and to the impulse phase spectrum:
Different combinations of operations applied to amplitude and phase spectra of data are the following:
Custom Impulse Trace Transforms
Theory
This module conducts a set of operations on the phase and amplitude spectra of the trace and an impulse (or a set of impulses).
Different combinations of operations on the phase and amplitude spectra are the following:
The Get impulse from dataset group of parameters allows selection of the impulse (impulses) from the
project database:
• Dataset – a dataset from which the impulses can be selected. To select a dataset, click the Select object… button. Besides selecting from the database, it is possible to specify the directory where the desired dataset is located with the Select location... button. After the directory is selected, the name of the dataset is put in the Dataset line either directly or with the help of the replica system (see the Replica system section).
• Matching header names – the field by which trace selection is carried out, i.e. for every trace, the search within the selected dataset is performed according to the value of this field.
• Time interval, [ms] – the start and end time of time window for the impulse (to do this the Get
time from Manual input mode should be chosen).
• Time interval from headers, [ms] – using the values of header fields of seismic traces as time
interval (to do this the Get time from Wavelet dataset header mode should be chosen).
• Zero time – the zero time of the pulse (available in Manual input mode in the parameter Get
zero time from).
• Zero time header – zero time of the pulse written to one of the dataset headers (available in
Wavelet dataset header mode in the parameter Get zero time from).
• File - is a file containing trace with impulse. This file must contain the sequence of real numbers
in R4 format (IEEE). To select a file, click the Select file... button.
• Time interval, [ms] - the start and end time of time window for the impulse. (The first trace
sample with impulse is considered to be the time zero).
• Dt, [ms] - sampling step in trace with impulse.
• Zero time, [ms] – the zero time of the pulse
Application window parameters:
• Time interval, [ms] – the start and end time of time data window in which the selected spectra
transform will be carried out (to do this the Get time from Manual input mode should be
chosen).
• Time interval from headers - using header field values of seismic traces as the start and end
time of time data window (to do this the Get time from Trace header mode should be chosen).
• Use tapering – use this option to minimize edge effects. When the Taper data option is on, the percentage of the trace length indicated in the Window field is tapered at the head and tail of each trace. Otherwise, a tapering area of the specified length is added to both sides of the trace and filled with mirrored values from the trace edges: A[N+1]=A[N], A[N+2]=A[N-1]…
The Amplitude spectra group of parameters defines what operations to be done with amplitude
spectrum:
The Phase spectra group of parameters defines what operations to be done with phase spectrum:
When the Preserve trace amplitudes box is checked, applying deconvolution operators to the trace does not change its overall amplitude level: the operator's energy is normalized to 1 before use.
Number of threads – the number of threads the process is subdivided into; it accelerates the module. Its maximum useful value is the number of CPU cores.
Predictive Deconvolution
Theory
Predictive deconvolution is applied to remove coherent noise and to increase resolution. The reflected signal (trace) is interpreted within a linear model defined by the convolution integral. Within this model, the frequency characteristic of a trace is the product of the characteristic of the receiving-recording channel and the frequency characteristic of the medium. The latter includes distortions caused by the medium which are actually interpretation errors of the observed data (for example, multiple reflections, ghost waves formed at the Earth's surface, and similar waves). Predictive deconvolution calculates the operator of a linear filter that removes the above-mentioned distortions. To do this, the Wiener-Hopf equation is solved with the Wiener-Levinson algorithm with white noise added, and the least-squares method is used to minimize the prediction error. A minimal sketch of the computation is given below.
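A minimal sketch of gapped predictive deconvolution along these lines (single window, no tapering; an illustration, not the module's actual implementation):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, op_len, gap, white_noise=0.1):
    # op_len and gap are in samples; white_noise is in percent.
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]  # positive lags
    r = ac[:op_len].copy()
    r[0] *= 1.0 + white_noise / 100.0      # pre-whitening of the zero lag
    g = ac[gap:gap + op_len]               # right-hand side at lag 'gap'
    f = solve_toeplitz(r, g)               # Wiener-Levinson prediction filter
    pred = np.convolve(trace, f)[:n]
    out = trace.copy()
    out[gap:] -= pred[:n - gap]            # prediction-error output
    return out
```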
Parameters
Time window – start and end time for deconvolution operator construction.
• Constant – the operator window boundaries are the same for all traces
• Variable – the borders of the operator calculation window for all traces are defined by the
headers specified by the user.
Prediction gap – prediction operator gap to be applied to input data expressed in ms.
Deconvolution operator length – deconvolution operator length expressed in ms. It defines the length for the autocorrelation function calculation.
Tapering length – length of interval for which a smooth tapering is to be done (ms).
White noise level - "white noise" level expressed in percents. This percentage of preliminary whitening
defines what percentage of white noise must be added to the original impulse characteristic.
Output operator – module output is a set of deconvolution operators. By default, the length of the
output operator is equal to the trace length; values after the actual end of the operator are filled with
zeros. It is also possible to set the length of the output operator manually.
Number of threads – the number of CPU cores to be used in computations. Each computer setup and module requires preliminary tests on a small data volume to find the number of CPUs that gives the best computation speed. One can start tests with the number of physical cores minus 1.
NOTE: The Predictive Deconvolution module can also be used for spiking deconvolution to increase record resolution. To do this, in the Prediction gap field, set the prediction gap equal to the sampling step, and in the Decon operator length field set the deconvolution operator length to at least the length of the source impulse.
Surface-Consistent Decon
The Surface-Consistent Decon module is designed to calculate inverse filters for surface-consistent
deconvolution (minimum-phase or zero-phase), as well as amplitude coefficients for surface-consistent
amplitude correction.
For the correct operation of the module, the headers defining the line and the point of excitation and
reception should be correctly filled in the input dataset: S_LINE, SOU_SLOC, R_LINE, REC_SLOC,
as well as the header with the offset values – OFFSET.
The surface-consistent deconvolution/amplitude correction itself is carried out in 2 stages. At the first
stage (in the 1st flow), deconvolution operators and amplitude corrections for each RP and SP are
calculated. At the second stage (in another flow) they are applied using the Custom Impulse Trace
Transform and Trace Math modules.
Brief theory
The method of surface-consistent deconvolution/amplitude correction is based on the idea of a
convolutional model of a trace. The recorded signal, in addition to the reflection coefficients, is
influenced by the initial impulse, the surface conditions at the source, the surface conditions at the
receiver, the distance between the receiver and the source, and a number of other factors. These effects
can be described by linear filters, with which the sequence of reflection coefficients is convolved. They
can be estimated from a statistical sample of traces, and then their effect on the record can be removed
using deconvolution.
In addition to the influence of these filters, additive noise, such as surface waves, industrial noise, and
others, affects the spectra of seismic records. A single-channel deconvolution, in which one operator per
trace is calculated, cannot separate these two factors, which leads to a non-optimal result and distortions
of the useful signal. The surface-consistent deconvolution reduces the effect of additive noise on the
result, because instead of one operator per trace, one operator is calculated for each unique component
— a source point (SP) and a receiver point (RP), which leads to large statistical redundancy.
The same RP and SP in different seismograms correspond to different offsets and, accordingly, different
times of arrival of noise waves. Therefore, by limiting the window for calculating deconvolution
operators by time and by offsets, it is possible to calculate inverse filters from noise-free data for all (or
almost all) SP and RP. Further, these filters can be applied to all traces, including those complicated by
noise. This leads to a more stable and less noisy result compared to single-channel deconvolution.
Module parameters
The module parameters dialog is as follows:
Dataset – select the input dataset from the database, for the traces of which amplitude coefficients and
operators for surface-consistent deconvolution will be calculated.
Area group of parameters defines a space-time analysis window that will be used to calculate
deconvolution operators and amplitude coefficients. This window should not have a surface wave, first
breaks and, if possible, other noise waves. At the same time, it should cover the widest possible range
of offsets in order to provide statistical redundancy for the maximum possible amount of SP and RP.
The time boundaries of the window can be constant, or determined by two horizons specified in the trace
headers. The spatial boundaries of the window are set by the minimum and maximum offset.
• Constant rectangle – the time interval is set by two constant boundaries (the window will turn
out to be rectangular):
a) Min.time (ms) – the minimum (start) time of the window, in ms.
b) Max.time (ms) – the maximum (end) time of the window, in ms.
• Boundaries – the time interval is set by the horizons from the headers of the traces:
a) Top boundary header – select a header for the upper window edge.
b) Bottom boundary header – select a header for the lower window edge.
• Offset constraints – here the spatial boundaries of the window are set, i.e. range of offsets.
a) Min offset (m) – minimum offset in m.
b) Max offset (m) – maximum offset in m.
• Min.window length (ms) – the minimum length of the window on the trace, necessary for the
calculation. If on any trace the time window length is less than the specified value, this trace will
be excluded from the calculations.
• Min. Fold – the minimum number of traces required to calculate the operator for this component
(SP or RP). If there are not enough traces related to this particular SP/RP, the operators will not
be calculated for this component under the assumption that there is not enough statistics for them.
• Amplitude rejection (%) – excludes traces with the highest and the lowest average amplitudes from the evaluation. Use this parameter to specify the rejection threshold (here, 10% means that 5% of the highest and 5% of the lowest amplitudes will be rejected; 0% allows all traces to be used).
Find surface-consistent amplitude gain – if checked, the algorithm will calculate the amplitude
coefficients and record them to the selected headers of the input dataset:
• Sources amp.gain header – select the header into which the amplitude coefficients for
the SP component will be recorded.
• Receiver amp.gain header – select the header into which the amplitude coefficients for
the RP component will be written.
Amplitude estimation method – a way to get per trace estimates of amplitudes in a window:
• Mean – average absolute amplitude values are used.
• RMS – RMS amplitude values are used.
The amplitude estimate is obtained by the selected method for each trace that falls within the spatial boundaries of the analysis window, from all the samples that fall into the time window. The obtained trace-by-trace amplitude estimates are decomposed into components (SP and RP) using the DRM (diminishing residual matrices) method.
At the final stage, the obtained amplitude estimates for the components are converted into correction amplitude coefficients. For each SP, the correction coefficient is calculated as the ratio of the average of the amplitude estimates over all SPs to the amplitude estimate for this SP. For RPs, correction coefficients are calculated similarly (see the sketch below).
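A minimal sketch of this final conversion (the DRM decomposition itself is not shown; the names are hypothetical):

```python
import numpy as np

def correction_gains(component_estimates):
    # component_estimates: amplitude estimate per SP (or per RP) after the
    # DRM decomposition; gain = mean over all components / this component.
    est = np.asarray(component_estimates, dtype=float)
    return est.mean() / est
```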
Find surface-consistent operators – if checked, the algorithm will calculate the deconvolution operators (inverse filters) for each unique component (each SP and each RP) and output them into the flow. The operators related to the SP component will have correctly filled headers defining the SP (S_LINE and SOU_SLOC), while the headers defining the RP (R_LINE and REC_SLOC) will have the value -1. The headers of the RP component operators are filled in the opposite way. Such header filling makes it possible to unambiguously find the necessary operators for each trace at the stage of applying the inverse filters.
The following headers are additionally filled in for operator traces (they can be used in the parameters
of the Custom Impulse Trace Transform module at the stage of application of operators):
• Operator start time header – the header in which the operator start time in ms will be
written (TLIVE_S by default).
• Operator end time header – the header in which the operator end time in ms will be
written (TFULL_S by default)
• Operator zero-time header – the header to which the position of the operator’s zero-
time will be written (TZERO by default).
Please note that Trace Input is not needed in this flow, as the module itself works with the input dataset.
The headers defining the line and the point of excitation and reception should be correctly filled in the
dataset: S_LINE, SOU_SLOC, R_LINE, REC_SLOC, as well as the header with the offset values –
OFFSET.
Trace Output is needed only if deconvolution operators are calculated – it saves them in the database. It
is not needed if only amplitude coefficients are calculated.
The calculated amplitude coefficients are stored in the selected headers of the input dataset.
Read the data and apply top muting with the Trace Editing module (recommended). Next, with 2 instances of the Custom Impulse Trace Transforms module, we apply the operators, separately for SPs and separately for RPs.
For SP, we define the operator for the trace using the headers s_line, sou_sloc:
For RP, we define the operator for the trace using the headers r_line, rec_sloc:
The rest is the same: the operator’s start, end and zero times are taken from the corresponding headers,
where the deconvolution module wrote them.
Since the operators are inverse filters, we divide the amplitude spectra and subtract the phase spectra, which corresponds to the deconvolution operation.
It is recommended to check the Preserve trace amplitudes box so that applying deconvolution operators
to the trace does not change its overall amplitude level. If checked, the operator's energy is normalized
to 1 before use.
After that, with 2 instances of the Trace Math module, we apply the amplitude corrections, separately for SPs and separately for RPs. To do this, MULTIPLY the trace amplitudes by the corrections recorded in the headers. In our case:
Accordingly, operators and amplitude corrections can be applied both together and separately.
Nonstationary predictive deconvolution
If the subtrahend field is the raw field shifted by one sample and is input to the adaptive wavefield subtraction algorithm (implemented in the Wavefield Subtraction module), then the calculated filter is a prediction-error filter, and the subtraction procedure acquires the meaning of predictive deconvolution.
If a non-zero number of special basis functions is set, this deconvolution equivalent can theoretically correct the variation of the wavelet shape with time, caused by different reasons, so the amplitude spectrum equalization can be better than with conventional deconvolution. In principle, shifted data can be input into the wavefield subtraction algorithm independently, but in that case the band transform would be performed after the shift, which is not quite correct. For this reason, and for convenience, a separate module is implemented in the program, in which the process works as the non-stationary deconvolution described above. In this mode only one wavefield is input – the field to which predictive deconvolution is to be applied. The subtraction algorithm is identical to the one implemented in the Wavefield Subtraction module.
That is why the description of the parameters, which are in general similar to those of that module, is given below in the context of subtraction of one wavefield from another.
Input data
• Zero phase input data – a flag indicating that the input data are zero-phase. This option works only in stationary mode. The filter is then recalculated into its zero-phase analogue.
Multiplication parameters
This set of parameters describes the characteristics of the basis functions used to multiply the traces of the subtrahend field.
The processing algorithm includes, in particular, the calculation of a shaping filter by the least-squares technique, such that the filtered subtrahend field is close to the initial field in the least-squares sense. This filter is a multichannel filter; the additional channels for its calculation and subsequent convolution are derived from the raw trace by adding parametrized non-stationarity in the time domain. More precisely, each multiplication function adds a channel formed by multiplying the raw trace by the function t^(αn), where n is the number of the multiplication function and α is the so-called Exponent parameter, used for fine tuning and usually equal to 1.
Thus, disregarding the Exponent parameter, we can say that if Number of basis functions is 0, the shaping filter is single-channel; if this parameter is equal to 1, the trace multiplied by time is added to the shaping; if it is equal to 2, a trace with squared non-stationarity is added, etc.
The more multiplication functions we assign, the better one field is subtracted from the other; but care should be taken, as too much can be subtracted from the raw field. A compromise has to be found, and the number of multiplication functions is selected individually, proceeding from the processor's experience in each case. Keep in mind that the extent of subtraction also depends on the other parameters, particularly the filter length and the length of the working window: the longer the filter and the shorter the window, the stronger the subtraction (again, at the risk of subtracting too much). A sketch of the basis-channel construction is given below.
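A minimal sketch of the basis-channel construction described above (the names are hypothetical):

```python
import numpy as np

def basis_channels(trace, dt, n_basis, exponent=1.0):
    # Channels for the multichannel shaping filter: the raw trace multiplied
    # by t**(exponent * n) for n = 0 .. n_basis (n = 0 is the trace itself).
    t = np.arange(len(trace)) * dt
    return [trace * t ** (exponent * n) for n in range(n_basis + 1)]
```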
Processing windows
Considering the nature of the records, processing sometimes needs to be done in different windows individually. The procedure supports division into windows via picks that define the processing window boundaries. If there is no boundary in the list, there is a single window from the start to the end of the trace. If one pick is assigned, the processing area is divided into two windows: one from the start to the pick, the other from the pick to the end of the trace. Two picks give three processing windows, and so on. Boundary picks are added/removed with the Add and Delete buttons; the current set of boundaries is shown in the list. The user should ensure that the boundaries do not intersect, since the program behavior is unpredictable with intersecting boundaries. Also, avoid too narrow windows, as the data there will be subtracted down to zero, which is undesirable. Sometimes, however, it is useful to set a narrow window and mark it as non-active (i.e. no subtraction is performed in it); in this way you can effectively mute areas that are unsuitable for subtraction.
The Tapering length parameter sets the zone, in samples, where the results of different windows are stitched together.
Subtraction parameters
• Window use – sets whether subtraction is performed in the given window (1) or the raw field is kept unchanged (0).
• Filter length – sets the length of the shaping filter, in samples.
• Hamming tapering window length – specifies the size of the Hamming window in samples. If it is 0, the Hamming window is not used.
• White noise level – regularization parameter: a fractional addition to the main diagonal of the autocorrelation matrix when solving the system, conventionally called the white noise level.
Filter averaging base – sets the arm of the base over which the autocorrelation and cross-correlation matrices are averaged during filter calculation. In other words, the shaping filter is averaged over a certain number of traces.
Other parameters
Accuracy – calculation accuracy. The parameter is used, in particular, when comparing the traces' energy before and after filtering.
Band transform
Filtering can be performed within the full bandwidth as well as within a limited bandwidth; the Band transform button toggles between the modes.
In the limited-bandwidth mode, a re-sampling is carried out – more precisely, a band transform (see the report attachment) of the data from the specified bandwidth to the whole area. The inverse band transform is applied to the result after the subtraction. This approach is used because the algorithm is designed to work in the full frequency range, while in practice it is sometimes preferable to use a limited bandwidth. The Low frequency and High frequency parameters set the low and high frequencies of the used bandwidth, respectively.
Because the record properties vary with depth, it is often helpful to divide the whole data area into processing windows and to use specific processing parameters for each window. This is supported by a number of the module's parameters, namely:
Number of basis functions, Exponent parameter, Window use, Filter length, Hamming tapering window length, White noise level, Low frequency and High frequency.
In this case the values for the windows are specified using the separator character ':'.
For example, subtraction parameters for three processing windows can be written as follows:
Static Corrections
Calculate Statics
The module is designed for the calculation of datum static corrections for PP and PS waves. The values
of corrections can be calculated for each trace for PP, as well as for PS waves. The acquired values are
written down to the header fields indicated by the user.
Attention: for the module to work, the following header fields should be present in the project: SOU_STAT, REC_STAT, SOU_STAT1, REC_STAT1.
The traces from the flow, in any sort order, are input to this module. The source and receiver positions are determined from the header fields indicated by the user. The elevations, the source depths, and/or the uphole time values are taken from text files or from the header fields indicated by the user. Vp and Vs velocities (in the weathering layer and in the layer from the weathering zone to the final datum) are taken from the editing fields in the parameters window and can vary laterally.
(Uphole time is the traveltime from the source in the borehole upwards to the earth's surface. The borehole source is supposed to be placed below the weathering zone (low-velocity layer).)
The module outputs the unchanged traces in the initial sort order. Shot and receiver static corrections for PP waves are written to the header fields SOU_STAT and REC_STAT (subject to user selection), and for PS waves to the header fields SOU_STAT1 and REC_STAT1.
Parameters
Select the calculation method of static corrections on the tab Calculation method:
• PP-statics – option indicating that corrections for P-waves will be calculated.
• PS-statics – option indicating that corrections for PS-waves will be calculated.
Reference headers – indicate the header fields containing the stake numbers of the source and receiver positions.
• Source reference field – header field used to determine the source position (source stake number), SOURCE by default.
• Receiver reference field – header field used to determine the receiver position (receiver stake number), RECNO by default. (It is assumed that stake numbering is the same for sources and receivers, i.e. SOURCE=10 means the source was situated at stake number 10, while RECNO=10 means the receiver was at the same place.)
• Shot holes using uphole times – uphole times are used for corrections calculation (by default).
• Shot holes ignoring uphole times – the values of uphole times are ignored. The traveltime
through the weathering zone is determined from the values Weathering Vp and Weathering Vs.
• Surface source – the sources are situated on the surface, uphole time is not used, the effective
velocities Replacement Vp and Replacement Vs are used for calculation.
• Surface source with weathering – the sources are situated on the surface, uphole value is not
used, the depth of the weathering layer and velocity are set for profile stakes (similar to the
borehole depth).
Next, on the Velocity tab, set the velocity values (as well as the thickness of the weathering zone, if it
is used) that will be used for the statics calculation.
The set of fields available on the tab depends on the options selected in Calculation method.
You have to set values in all accessible fields.
The values can change laterally (they are interpolated linearly between the stakes; a sketch of this
interpolation is given after the field list below). The syntax of the values on the tab is the following:
stake_number:value, stake_number-stake_number:value
• Replacement Vp – the values of Vp associated with the stake numbers are indicated in the
editing field.
• Replacement Vs – (accessible if the PS-statics option is on) the values of Vs associated with
the stake numbers are indicated in the editing field.
• Weathering Vp – (accessible if you choose Shot holes ignoring uphole times or Surface
source with weathering in the Calculation method field) the values of Vp in the weathering
layer are assigned in association with the stake numbers in the editing field.
• Weathering Vs – (accessible if you choose Shot holes ignoring uphole times or Surface
source with weathering and the PS-statics option is toggled on in the Calculation method
field) the values of Vs in the weathering layer are assigned in association with the stake
numbers in the editing field.
• Weathering depths – (accessible if you choose Surface source with weathering in the
Calculation method field) the depths of the weathering layer are assigned in association with
the stake numbers in the editing field.
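For illustration, below is a minimal Python sketch (not part of RadExPro) of how laterally varying
values written in this stake:value syntax can be parsed and linearly interpolated between stakes. The
entry string "10:1500, 20-30:1600" is a hypothetical example; non-negative stake numbers are assumed.

    import numpy as np

    def parse_entries(text):
        """Turn 'stake:value' and 'stake-stake:value' entries into (stake, value) points."""
        points = []
        for entry in text.split(","):
            stakes, value = entry.split(":")
            value = float(value)
            if "-" in stakes:
                first, last = (float(s) for s in stakes.split("-"))
                points.append((first, value))
                points.append((last, value))
            else:
                points.append((float(stakes), value))
        return sorted(points)

    def value_at(points, stake):
        """Linear interpolation between stakes, constant beyond the end stakes."""
        stakes = [p[0] for p in points]
        values = [p[1] for p in points]
        return float(np.interp(stake, stakes, values))

    points = parse_entries("10:1500, 20-30:1600")
    print(value_at(points, 15.0))   # halfway between stakes 10 and 20 -> 1550.0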
Finally, on the Geometry tab, indicate the source of each geometry parameter that will be used for the
statics calculation, as well as the final datum level.
• Geometry and upholes from file – allows indicating a text file (~A-format, see Borehole
Loading and visualization.../File formats) from which the elevation/depth and uphole values
are read. If the option is toggled on, the Browse button for selecting the file becomes available.
Several (or all) geometry parameters can be read from the corresponding file columns (the
column names for each parameter are given below). If the option is toggled off, all geometry
values are read from the trace headers.
• Source elevations
▪ from file – available if Geometry and upholes from file is toggled on. The values are
read from the SE column of the selected file.
▪ from header – choose the header field containing the corresponding values from the
pop-up list on the right.
• Source depths – (available in all cases except Calculation method: Surface source)
▪ from file – available if Geometry and upholes from file is on. The values are read
from the SD column of the chosen file.
▪ from header – choose the header field containing the corresponding values from the
pop-up list on the right.
• Uphole times – accessible if Calculation method: Shot holes using uphole times is used
▪ from file – available if Geometry and upholes from file is on. The values are read
from the UT column of the chosen file.
▪ from header – choose the header field containing the corresponding values from the
pop-up list on the right.
• Receiver elevations – surface elevation at the receiver point
▪ from file – available if Geometry and upholes from file is on. The values are read
from the RE column of the chosen file.
▪ from header – choose the header field containing the corresponding values from the
pop-up list on the right.
• Final datum elevation – elevation (in meters) of the datum to which the traces are reduced by
means of the static corrections.
Module functioning
The static corrections for each trace are calculated using the following formulas:
• For PP:
▪ Shot statics:
If you use Calculation method: Shot holes (using or ignoring uphole times):
source_statics = (f_datum - source_elevation + source_depth) / replacement_vp
If you use Calculation method: Surface source:
source_statics = (f_datum - source_elevation) / replacement_vp
If you use Calculation method: Surface source with weathering:
source_statics = (f_datum - source_elevation + weathering_depth) / replacement_vp -
weathering_depth / weathering_vp
▪ Receiver statics:
If you use Calculation method: Shot holes using uphole times:
receiver_statics = (f_datum - receiver_elevation + source_depth) / replacement_vp - uphole
If you use Calculation method: Shot holes ignoring uphole times:
receiver_statics = (f_datum - receiver_elevation + source_depth) / replacement_vp -
(receiver_elevation - source_depth) / weathering_vp
If you use Calculation method: Surface source:
receiver_statics = (f_datum - receiver_elevation) / replacement_vp
If you use Calculation method: Surface source with weathering:
receiver_statics = (f_datum - receiver_elevation + weathering_depth) / replacement_vp -
weathering_depth / weathering_vp
• For PS (shot statics use replacement_vp, since the downgoing leg is a P wave):
▪ Shot statics:
If you use Calculation method: Shot holes (using or ignoring uphole times):
source_statics = (f_datum - source_elevation + source_depth) / replacement_vp
If you use Calculation method: Surface source:
source_statics = (f_datum - source_elevation) / replacement_vp
If you use Calculation method: Surface source with weathering:
source_statics = (f_datum - source_elevation + weathering_depth) / replacement_vp -
weathering_depth / weathering_vp
▪ Receiver statics:
If you use Calculation method: Shot holes using uphole times:
receiver_statics = (f_datum - receiver_elevation + source_depth) / replacement_vs -
uphole * weathering_vp / weathering_vs
If you use Calculation method: Shot holes ignoring uphole times:
receiver_statics = (f_datum - receiver_elevation + source_depth) / replacement_vs -
(receiver_elevation - source_depth) / weathering_vs
If you use Calculation method: Surface source:
receiver_statics = (f_datum - receiver_elevation) / replacement_vs
If you use Calculation method: Surface source with weathering:
receiver_statics = (f_datum - receiver_elevation + weathering_depth) / replacement_vs -
weathering_depth / weathering_vs
Here:"source_elevation, source_depth, uphole – the values of surface elevation in receiver point,
depth of source position in the borehole and uphole time in receiver point (they should be set for
each source point). If there were no source point data on the given stake when calculating statics
– the values are linearly interpolated.
"receiver_elevation – elevation value of receiver point position (they should be set for each receiver
point).
"f_datum - datum, at which the data are reduced, it doesn’t change along the profile "velocities
replacement_vp, replacement_vs, weathering_vp, weathering_vs are given for separate stakes and are
linearly interpolated in between.
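As an illustration of the formulas above, here is a minimal Python sketch (not the module's code) of
the PP-wave case for the Shot holes using uphole times method; the numeric values are hypothetical.

    # Units: meters, m/s, seconds.
    def pp_source_static(f_datum, source_elevation, source_depth, replacement_vp):
        # source_statics = (f_datum - source_elevation + source_depth) / replacement_vp
        return (f_datum - source_elevation + source_depth) / replacement_vp

    def pp_receiver_static(f_datum, receiver_elevation, source_depth,
                           replacement_vp, uphole):
        # receiver_statics = (f_datum - receiver_elevation + source_depth) / replacement_vp - uphole
        return (f_datum - receiver_elevation + source_depth) / replacement_vp - uphole

    # Datum at 100 m, source at 95 m elevation in a 10 m hole, receiver at 98 m,
    # replacement velocity 2000 m/s, uphole time 6 ms:
    print(pp_source_static(100.0, 95.0, 10.0, 2000.0))           # 0.0075 s
    print(pp_receiver_static(100.0, 98.0, 10.0, 2000.0, 0.006))  # 0.0 s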
Apply Statics
This module applies static corrections that can be entered manually, read from the database, or taken
from trace header fields. Static shifts should be expressed in milliseconds. Negative values reduce the
time and shift the data upwards; positive values add to the time and shift reflections deeper in the
section. The program interpolates corrections between the specified reference points.
Parameters
Horizon – selects the way to assign the correction that will be applied to the traces:
• Header – the static correction is taken from a trace header, selected in the header list that
appears when clicking the Specify header field button.
• Pick – allows loading the static corrections from a database object, selected in the standard
dialog box that appears when clicking the Specify pick… button.
• Text – allows setting the corrections manually in the window that appears when clicking the
Specify text button.
An example:
CDP
0-50:500, 70:300
In this example, 0-50 and 70 are the header field values to which the corrections are applied (here,
CDP numbers), and 500 and 300 are the correction times in ms. That is, the traces of CDP points 0 to
50 will be shifted downwards by 500 ms, and the trace of CDP point 70 will be shifted downwards by
300 ms.
Relative to time – allows adding a constant shift to the static corrections; the shift is specified in a
respective window.
Subtract static – applies the static corrections with a reversed sign.
Apply fractional statics – applies static corrections whose values are smaller than the sampling
interval (a sketch of one way to do this follows the parameter list).
Max number of threads – the number of threads into which the process will be split during the
execution of the module. This is used to accelerate the processing. The maximum value must not
exceed the number of CPU cores.
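The following minimal Python sketch applies a shift smaller than the sampling interval via a
Fourier-domain phase shift. This is only an assumption about one common way to implement
fractional shifts; it is not necessarily how the module itself does it.

    import numpy as np

    def apply_static(trace, shift_ms, dt_ms):
        """Shift a trace by shift_ms (positive values shift the data down in time)."""
        n = len(trace)
        freqs = np.fft.rfftfreq(n, d=dt_ms)                  # cycles per ms
        spectrum = np.fft.rfft(trace)
        spectrum *= np.exp(-2j * np.pi * freqs * shift_ms)   # a delay is a linear phase
        return np.fft.irfft(spectrum, n)

    trace = np.zeros(64)
    trace[20] = 1.0
    shifted = apply_static(trace, shift_ms=2.5, dt_ms=2.0)   # a 1.25-sample shift
    print(np.argmax(shifted))                                # the spike now peaks near sample 21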
MaxPower Autostatics*
MaxPower Autostatics calculates surface-consistent statics on 2D and 3D data by stack power
optimization.
Input data requirements:
For each active CDP in the given dataset, a pilot trace is generated. The pilot trace is generated by
summing traces within the defined time windows, aligned on the autostatic horizon, over a number of
CDP ensembles. Then the cross-correlation function between every active CDP ensemble trace and the
pilot trace is calculated. The cross-correlations for every trace are then summed by sources and by
receivers separately. The static shifts that correspond to the maxima of the summed cross-correlation
functions are assigned to the SOU_STAT and REC_STAT headers. Their sum is assigned to the
TOT_STAT header. For a detailed description of the algorithm, see the following paper:
Ronen, J., and Claerbout, J. F., 1985, Surface-consistent residual statics estimation by stack-power
maximization: Geophysics, 50, no. 12, 2759-2767.
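The following minimal Python sketch illustrates the central step described above (names, shapes and
sign conventions are illustrative assumptions, not the module's code): cross-correlate each trace with
its CDP pilot, sum the cross-correlations over all traces sharing a source (or receiver), and take the lag
of the summed maximum as that source's (or receiver's) static.

    import numpy as np

    def summed_ccf_statics(traces, pilots, keys, max_lag, dt_ms):
        """traces, pilots: (ntr, ns) arrays (pilot of each trace's CDP);
        keys: source or receiver id of each trace; returns {id: static_ms}."""
        lags = np.arange(-max_lag, max_lag + 1)
        ccf_sum = {}
        for trc, pil, key in zip(traces, pilots, keys):
            full = np.correlate(trc, pil, mode="full")   # zero lag at index ns - 1
            mid = len(trc) - 1
            ccf = full[mid - max_lag: mid + max_lag + 1]
            ccf_sum[key] = ccf_sum.get(key, 0) + ccf
        return {k: float(lags[np.argmax(v)]) * dt_ms for k, v in ccf_sum.items()}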
Parameters
Use current statics from header – if selected, the static shifts from the SOU_STAT and REC_STAT
headers are applied before the statics calculation. The resulting static shifts will be the sum of the
current header values and the shifts calculated by the module.
Processing parameters:
Restriction parameters:
• Minimum smash fold – the minimum number of traces in a CDP ensemble required for pilot
trace calculation. If a CDP ensemble does not contain the required number of traces, it is
skipped from the calculation.
• Minimum source fold – the minimum number of traces in a shot gather required for the static
shift calculation. If the source fold is less than this value, the static for this source will be 0.
• Minimum receiver fold – the minimum number of traces in a receiver gather required for the
static shift calculation. If the receiver fold is less than this value, the static for this receiver
will be 0.
Save receiver statics to – set the name of the header where the calculated shifts for the receivers will
be saved.
Save source statics to – set the name of the header where the calculated shifts for the sources will be
saved.
Save total statics to – set the name of the header where the calculated total shifts (receiver plus
source) will be saved.
RMS convergence criteria, [ms] – on every iteration, the RMS value of the calculated statics is
compared with the defined criterion value; when the RMS value falls below the criterion, the statics
calculation is finished. Choose 0 to ignore this parameter.
Pick global maximum – if selected, the static shifts are calculated at the absolute maximum of the
summed cross-correlation function. Otherwise, Local max of global max can be defined – the
percentage of the absolute maximum above which a local maximum is accepted as the static solution.
Autostatic horizons
• Time window length, [ms] – the length of the time window used to calculate the statics. The
time window is centered on the selected horizon.
• Minimum live samples in a window, [%] – the minimum percentage of live samples in a time
window required for the statics calculation. If the number of live samples is less than defined,
the window is skipped from the calculation.
Limitations
Process the whole line – if selected, the whole dataset is taken for the statics calculation. Otherwise,
define the CDP range:
• CDPs – starting and final CDP numbers (available in the 2D case)
• Inlines – starting and final inline numbers (available in the 3D case)
• Xlines – starting and final crossline numbers (available in the 3D case)
Trim Static
The module calculates correlation static corrections within an ensemble of traces for a particular time
window. The input traces shall be NMO-corrected.
Output data: the static correction values are written into the specified header of each trace. You can
apply the corrections using the Apply Statics module.
The module produces a stack trace for each ensemble and evaluates the CCF of each trace of the
ensemble with the stack trace within the time window. The time of the CCF maximum is output as the
static shift for each trace.
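A minimal Python sketch of this logic (illustrative only, assuming the ensemble is a NumPy array; not
the module's actual code):

    import numpy as np

    def trim_statics(ensemble, dt_ms, window, max_shift_ms, threshold=0.0):
        i0, i1 = window                                  # time window as sample indices
        pilot = ensemble.sum(axis=0)[i0:i1]              # stack trace of the ensemble
        max_lag = int(max_shift_ms / dt_ms)
        shifts = []
        for trace in ensemble:
            ccf = np.correlate(trace[i0:i1], pilot, mode="full")
            mid = (i1 - i0) - 1                          # index of zero lag
            part = ccf[mid - max_lag: mid + max_lag + 1]
            norm = np.linalg.norm(trace[i0:i1]) * np.linalg.norm(pilot) + 1e-12
            if part.max() / norm < threshold:            # CCF maximum below Threshold level
                shifts.append(0.0)                       # no correction for this trace
            else:
                shifts.append((np.argmax(part) - max_lag) * dt_ms)
        return np.array(shifts)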
Module parameters
Maximum shift (ms) – the value of the maximum possible shift of a trace.
Threshold level – if the maximum of the CCF between the pilot trace and an ensemble trace does not
exceed the Threshold level, the static correction is not calculated for this trace.
Use pilot trace – choose how the pilot trace will be calculated:
• If the option is not activated, the pilot trace is calculated as the sum of all traces inside the
ensemble.
• If the option is activated, the module uses the first trace of each ensemble as the pilot trace.
Thus, you can generate a set of pilot traces for the ensembles in any way (for example, by
averaging over several neighboring ensembles aligned to the horizon).
Time range – specify the time window for which the correlation statics will be calculated. There are 2
options to do this:
Left – original NMO-corrected CMP gather, right – the same with trim statics applied
Auto Statics*
The module is intended for the calculation of residual statics for surface conditions in land seismic
surveys. Static shifts are calculated relative to "pilot" traces on the basis of cross-correlation functions
(CCF) in a time window along the assigned horizons and are saved in the trace header fields selected
by the user. Later on, they can be applied to the data in a separate flow by means of the Apply Statics
module.
This module belongs to the class of independent (stand-alone) modules, i.e. it does not require any
other I/O modules in the flow. The module reads a set of 2D or 3D data from the project database in
the sorting defined by the user, and a set of picks. Pilot traces are calculated in the assigned time
window along the picks, on a set of gathers or using the total section.
Static shifts are calculated relative to the "pilot" traces on the basis of cross-correlation functions
(CCF) in a time window along the assigned horizons. The shifts recognized as reliable are then saved
in the trace header fields selected by the user.
Then the static shifts are decomposed in a surface-consistent manner. The result is saved in the form of
picks where only traces with admissible static shifts are involved: a general pick (for the source-
receiver pair), a pick for the receiver component, and a pick for the source component.
For proper operation of the module, the input data must satisfy the following condition: the traces
must have the TLIVE_S and TFULL_S header fields filled. TLIVE_S must contain the start time of
the live (non-zeroed) part of the trace, and TFULL_S must contain the end time of the live part. Trace
samples outside the (TLIVE_S – TFULL_S) interval are regarded by the module as zero, irrespective
of their actual values. These headers allow restricting, for each trace, the time span that will be used
for pilot creation, without explicit application of the top and bottom muting procedures. If no
additional time limit is required, set TLIVE_S to 0 and TFULL_S to the full time range of the traces
(for example, by means of the Trace Header Math module, by setting two lines in the parameters:
“TLIVE_S = 0” and “TFULL_S = ([NUMSMP]-1) * [dt]”).
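For illustration, a minimal Python sketch of this behavior (assumed equivalent, not the module's
code): samples outside the live span are treated as zero.

    import numpy as np

    def live_part(trace, dt_ms, tlive_s, tfull_s):
        t = np.arange(len(trace)) * dt_ms
        masked = trace.copy()
        masked[(t < tlive_s) | (t > tfull_s)] = 0.0   # samples outside the live span -> 0
        return masked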
Parameters
Dataset (selection of the dataset for statics calculation). This tab shows the path to the dataset for
which statics will be calculated.
• Input dataset
The data sorting keys are set in the Sorting field. For example, for a 2D land survey the CDP,
OFFSET fields can be used, while for a 3D land survey XLINE_NO, ILINE_NO, OFFSET
(crossline number, inline number, offset) can be applied.
The Horizons field displays the list of horizon picks along which the static shifts will be estimated.
The picks have to be created in advance, for example, in the Screen Display module.
It is important that the picks are anchored to the same header fields as those specified in the first two
sort keys of the Sorting field on the Dataset tab.
You can add or delete picks from the list of horizons using the Add... and Delete... buttons,
respectively.
In the Velocity field, a velocity function acquired earlier in other processing flows during vertical
velocity analysis is set (i.e., the output velocity function of the Interactive Velocity Analysis module).
The stacking velocities are used to account for the kinematic shift of the traces relative to the pilot
trace set prior to the calculation of the residual statics.
Window size above horizon (ms) – the size of the window for the cross-correlation function
calculation above the horizons, in ms.
Window size below horizon (ms) – the size of the window for the cross-correlation function
calculation below the horizons, in ms.
This tab shows the parameters for the calculation of the pilot trace set.
Super gathering base (gathers) – the number of ensembles that will compose the super gather base.
This parameter sets the amount of traces that will be used for the calculation of the pilot trace. For
each ensemble of the input data, a super gather base consisting of n ensembles is formed (n equals the
value of the Super gathering base parameter).
The ensemble for which the super gather base is formed is the central ensemble of the base (apart
from the "edge" ensembles located less than n/2 ensembles from the edges of the data area accessible
to the module). For the edge ensembles, the super gather "window" is moved so that the window edge
coincides with the limit of the accessible data. For example, for the very first ensemble the super
gather base will consist of the first n ensembles, and for the last one, of the last n ensembles.
Super gather bases can consist of several CDPs (for 2D) or several inlines/crosslines (for 3D). The
header field on which the super gather bases are formed is set by the first sort key of the Sorting field
on the Dataset tab.
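A minimal, illustrative Python sketch of this edge handling:

    def super_gather_window(center, n, total):
        """Return (first, last) ensemble indices, inclusive, for a base of n ensembles."""
        first = center - n // 2
        first = max(0, min(first, total - n))   # keep the whole window inside the data
        return first, first + n - 1

    print(super_gather_window(0, 5, 100))    # (0, 4): the first ensemble uses the first n
    print(super_gather_window(50, 5, 100))   # (48, 52): a central ensemble is centered
    print(super_gather_window(99, 5, 100))   # (95, 99): the last ensemble uses the last n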
Offset binning. This group of parameters is intended for binning by offsets. It is used by the program
only if the Output pilot dataset option is enabled and the Pilot type: Gathers pilot trace set type is
selected.
If Pilot type: Stack is selected, the parameters from this group are ignored.
If the Output pilot dataset option is enabled, the button for selecting the output dataset for the pilot
traces (...) and the pilot trace set type selection become available:
Pilot type
• Gathers – seismograms
• Stack – stacked section
• Picking
The parameters of this tab are intended for the calculation of the static shifts relative to the pilot trace
set. They assign the set of threshold values for the calculation of the admissible shifts of the pilot
trace and of the shifts of the initial traces relative to the pilot, the criteria of their reliability, and the
path to the database file in which the calculated shifts for each trace will be stored.
During the shift calculation, the maximum admissible shift to be considered in the statics calculation
is assigned, and the CCF of the initial and pilot traces is calculated. If the initial and pilot traces
coincide, the CCF maximum is reached at a zero shift value. Otherwise, within the set time window,
the CCF will have one absolute maximum (A) and numerous local maxima of various values in the
proximity of zero. When calculating the shift, the local CCF maximum nearest to zero that exceeds a
threshold value calculated as p*A, where p is a parameter set by the user, is searched for.
The pilot traces are designed using the picks created by the user. The picked values on particular
traces can deviate from the true horizon times. Therefore, the acquired pilot traces can be shifted
relative to the picks. To eliminate this shift, the pilot traces can be moved so that the position of the
nearest local maximum of the pilot coincides with the horizon time. The nearest local maximum is
defined relative to the threshold value p*A (as in the case described above), and the admissible shift is
limited by a value set by the user.
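A minimal Python sketch of this search (illustrative only; not the module's code): among the local
maxima of the CCF exceeding p*A, with A being the absolute maximum, pick the one whose lag is
nearest to zero.

    import numpy as np

    def nearest_local_max(ccf, lags, p):
        """ccf is sampled at the lags in `lags`; returns the picked lag or None."""
        threshold = p * np.max(ccf)
        candidates = [i for i in range(1, len(ccf) - 1)
                      if ccf[i] >= ccf[i - 1] and ccf[i] > ccf[i + 1]
                      and ccf[i] >= threshold]
        if not candidates:
            return None                             # no reliable shift for this trace
        best = min(candidates, key=lambda i: abs(lags[i]))
        return lags[best]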
Max. signal shift, ms – the maximum allowed wave pattern shift between adjacent traces that will be
considered in the statics calculation. If the calculated pilot trace shift exceeds the value of the Max
signal shift parameter, the pilot trace shift is considered unreliable.
Local maximum level – the value p that sets the threshold for the CCF of the trace and the pilot when
searching for the position of the local CCF maximum that is nearest to zero and exceeds the threshold
value while calculating the static shift.
The next parameter defines the file name where the calculated static shifts will be written. This
parameter is optional and is intended mainly for the analysis of the module operation and
troubleshooting. In the operating mode, the shifts are saved in the output trace headers.
Headers
This group of parameters allows setting the names of the headers where the calculated shifts and
weights for each horizon will be saved (lines Horizon 1, …), as well as the resulting weighted-average
shift (line Average). In addition, for each horizon (time window), the Time/shift field contains the
sum of the calculated shift and the horizon time for the trace. The toggle to the right of each line
(Average, Horizon 1, …) enables selection of the values to be written to the dataset trace headers at
the module output. The Time/shift column sets the header fields intended for the time shifts; the
Weight column sets the header fields for the weights used in the averaging of the shifts.
Calibration
• Weight threshold – the threshold value, in the range of 0 to 1, of the reliability estimate of the
acquired static shift, used for the rejection of static shifts. If the calculated reliability estimate
of the shift for a trace is below the threshold value, the shift for this trace is rejected and is not
used in the surface-consistent decomposition procedure. If the value is 0, all shifts are
considered reliable and are used in the calculation of the surface-consistent solution; if the
value is 1, all shifts are rejected.
• Accel. factor – the parameter that defines the acceleration of the iteration process. The lowest
process speed corresponds to a parameter value of 1. The higher the parameter, the higher the
speed. The higher the speed, the lower the quality of the operation, as it reduces the number of
traces participating in the surface-consistent shift determination procedure.
• Iterations – specifies the maximum number of iterations in the process.
• Tolerance – the discrepancy value that sets the limit at which the iteration process is
terminated before the completion of the assigned maximum number of iterations.
• Save calibrated shifts – if enabled, specifies the path for saving the calculated surface-
consistent shifts in the project database.
• Save decomposed shifts – saves (or does not save) the source-consistent shifts (SRC) and the
receiver-consistent shifts (REC) separately, in accordance with the paths specified by the user.
Correlation Statics (calculation of automatic and correlation-based static
corrections)
This module allows calculating static trace shifts relative to the “reference” traces using the cross-
correlation function. The resulting static shifts are usually used in subsequent processing as
correlation-based static corrections (trim statics) or as input data for calculation of automatic static
corrections (auto statics). In some cases, this procedure can be used to refine the event correlation
(picking).
Module algorithm
The following data are input into the Correlation Statics module:
• data from the workflow containing the source data (for which the shifts will be calculated) and the
reference traces. The input dataset must contain one reference trace per ensemble (seismogram);
• set of horizons (picks) defining the time regions for which the cross-correlation functions and shifts
will be calculated;
• stacking velocities used to calculate the reflection travel time curves (static shifts are calculated taking
into account the kinematic shift of the traces relative to the reference trace).
1. Based on the specified horizons (picks) and time window sizes, the time window boundaries are
determined for each input data ensemble. After that, steps 2 and 3 are performed independently for
each time window.
2. Optionally (if the corresponding mode is enabled in the module parameters), the reference traces are
shifted so that the position of the reference trace’s local maximum (the one closest to the horizon)
matches the horizon time.
3. For each ensemble (seismogram), the cross-correlation function is calculated between the reference
trace and each ensemble trace. The position of the nearest local maximum of the cross-correlation
function determines the static shift (for the given time window).
4. The resulting weighted-average trace shift relative to the reference trace is calculated for each source
trace. Averaging is performed between the shifts obtained for the specific trace independently in each
time window. The weight of each shift depends on the value of the local extremum of the cross-
correlation function between the source trace and the reference trace, and is equal to the ratio of the
found local maximum to the absolute maximum (p1). If the reference trace shift mode is enabled, the
weight also depends on the value of the local maximum of the reference trace matching the position
of the horizon after the shift. In this case, the “reference trace weight” (p2) is equal to the ratio of the
local maximum to the absolute maximum, and the final weight used for shift averaging is equal to
the geometric mean of p1 and p2 (SQRT(p1*p2)). A sketch of this averaging is given after this list.
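A minimal, illustrative Python sketch of the shift averaging in step 4 (the input lists are hypothetical,
one entry per time window):

    import math

    def average_shift(shifts, p1, p2=None):
        """Weighted average of per-window shifts; weights are p1, or sqrt(p1*p2)
        when the reference trace shift mode contributes a reference weight p2."""
        if p2 is None:
            weights = p1
        else:
            weights = [math.sqrt(a * b) for a, b in zip(p1, p2)]
        return sum(w * s for w, s in zip(weights, shifts)) / sum(weights)

    print(average_shift([4.0, 6.0], [0.9, 0.3]))   # 4.5: closer to the better-correlated window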
Depending on the execution mode, the module outputs either a reference trace set or a set of source
traces for which the shifts were calculated.
The source trace set headers contain the resulting static shift as well as the shifts obtained independently
for different time windows (more specifically, the sum of the shift and the basic horizon is output for
better visualization of the resulting correlation shifts in the form of picks or headers (on the time scale)
displayed on-screen in the Screen Display module) and the weights used for averaging.
The reference trace output mode is useful for verifying the accuracy of the module input data and
ensuring that the reference trace shift (if any) is applied correctly.
It is often possible (especially when not using the reference trace shift mode) to obtain the required
correlation shifts through a relatively rough (smoothed) picking of the basic horizon in the cross-section
without the need for detailed horizon picking.
The basic horizon times also have no direct effect on the amount of the static shift calculated as a result
of correlation with the reference trace.
Input data requirements
For proper functioning of the module within the workflow, the module input data must meet the
following requirements:
• The input data must contain the reference traces and the traces for which the shifts are to be calculated.
• For reference traces, the value of the TRC_TYPE field must be equal to 2. For source (shifted) traces,
the value of the TRC_TYPE field must be equal to 1.
• Each input data ensemble must consist of a single reference trace and one or more traces for which
the shifts are to be calculated. The reference trace must be the first one in the ensemble.
Example 1
Let us assume that we need to calculate the static trace shifts relative to a reference trace set prepared
for CDP seismograms. We have a dataset containing one reference trace per CDP with the header field
TRC_TYPE set to 2 for all traces. The source (shifted) traces are stored in another dataset which contains
one seismogram per CDP and has the TRC_TYPE field set to 1 for all traces. To obtain the required
sorting, we need to specify the following parameters for the Trace Input module:
In this example, the CDP number will be the ensemble key, the first trace of each ensemble at the output
of the Trace Input module will be the reference trace, and the remaining traces will represent the CDP
seismogram for the source data. The Correlation Statics module will then calculate the shift of each
input data trace relative to the reference trace independently for each ensemble.
Example 2
For single-channel profiling data, we need to calculate static shifts of each profile trace relative to a
single reference trace. The reference trace is stored in a separate dataset, and the TRC_TYPE header is
set to 2 for the reference trace and to 1 for all profile traces. The XLINE_NO trace header field contains
the same profile number (for example, 1) in both datasets.
With this configuration of parameters, the entire input data set (combined from two datasets from the
database) will consist of a single ensemble (where the first sorting key will be the same for all traces)
beginning with the reference trace. The static shifts will be calculated for all profile traces.
Example 3
For single-channel profiling data, we need to calculate static shifts of each profile trace relative to a
reference trace specific to that input trace. We have a dataset containing one reference trace per profile
point with the profile point number specified in the CDP field and the TRC_TYPE header field set to 2
for all traces. The source (shifted) traces are stored in another dataset with the profile point number
specified in the CDP field and the TRC_TYPE field set to 1 for all traces.
With this configuration of parameters, each ensemble will consist of two traces – the reference trace (the
first one) and the profile trace. For each profile trace, the Correlation Statics module will calculate the
static shift relative to the reference trace.
The input data must be sorted by the same key as the one specified in the tie-in fields of the picks
defining the reference trace time windows. For example, if CDP is used as the tie-in key for the picks,
the input data must have CDP as the first sorting key.
Parameters
The list contains the horizons defining the time window set. The number of time windows used is the
same as the number of horizons in the list. Each time window of the reference trace is defined as the
time interval from t0-t1 to t0+t2, where t0 is the time of the corresponding pick on that trace, t1 is the
value of the Window size above horizon parameter (see below), and t2 is the value of the Window size
below horizon parameter (see below). If the picking point is not defined for the trace, the t0 time is
obtained by linear interpolation between the nearest picking times.
You can add horizons from the system database using the Add… button to the right of the list. Pressing
the Delete button to the right of the list deletes the selected horizon from the list.
NOTE: The pick tie-in fields must be the same for all picks added to the list. The input data must be
sorted by the same key as the one specified in the tie-in fields of the picks. For example, if CDP is used
as the tie-in key for the picks, the input data must have CDP as the first sorting key.
Window size above horizon (ms), Window size below horizon (ms) – these parameters refer to the
horizon currently selected in the Horizons list, and define the boundaries of the time window (see the
description of Horizons).
Velocity – velocity model. Press the Browse… button to the right of the parameter to select a velocity
model from the database.
Specifying the stacking velocities allows accounting for the kinematic shift to ensure proper calculation
of the static shift. The stacking velocities are used to find the times on the source (shifted) traces relative
to which the shift will be calculated. When calculating the resulting shift for the specific time window,
the Tm(t0, Velocity) time is subtracted from the time found using the cross-correlation function
maximum, where Tm is the hyperbolic reflection travel time curve, t0 is the specified time on the
reference trace, and velocity is the stacking velocity for the CDP to which the trace belongs and the t0
time. This parameter is optional. If it is omitted, the Tm(t0, Velocity) is assumed to be equal to zero.
There is no need to configure the velocity model when inputting data with applied kinematic corrections
or a stack (or single-channel data with zero offset) into the module. In the first case (with kinematic
corrections applied), specifying the stacking velocities will result in incorrect stacking of the reference
trace data.
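For reference, the hyperbolic reflection travel time curve mentioned above can be written as the
standard NMO hyperbola, Tm(t0, velocity) = sqrt(t0^2 + (offset/velocity)^2). A minimal Python
sketch with illustrative values:

    import math

    def tm(t0_s, offset_m, velocity_ms):
        return math.sqrt(t0_s ** 2 + (offset_m / velocity_ms) ** 2)

    # 1.0 s zero-offset time, 1000 m offset, 2000 m/s stacking velocity:
    print(tm(1.0, 1000.0, 2000.0))   # ~1.118 s, i.e. a kinematic shift of ~118 ms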
Output data determines the mode in which the module will be run.
In the Original mode, the module will output the source traces with the shifts calculated and saved to
the specified header fields (the output will not contain any reference traces). This is the primary
execution mode for the module.
In the Etalon gathers mode, the module will output the reference traces. This mode is useful for
verifying the accuracy of the input data and visually checking the reference trace shift results (if the
reference trace shift mode is enabled).
Shift etalon enables the reference trace shift mode. When this mode is activated, the reference traces
are shifted, before the static shifts are calculated for the given time window, so that the position of the
nearest local maximum of the reference trace matches the horizon time. When determining the nearest
local maximum, maxima with amplitudes of less than p*A are ignored, where A is the reference
trace’s absolute maximum value within the given time window, and p is the value of the Local
maximum level parameter.
Max. etalon shift, ms – used only when the reference trace shift mode is enabled. If the calculated
reference trace shift exceeds the value of this parameter, no shift will be applied to the reference trace.
Local maximum level – to calculate the trace shift relative to the reference trace, the module finds the
position of the nearest local maximum of the cross-correlation function between the source trace and the
reference trace the absolute value of which exceeds p*A, where A is the absolute maximum of the cross-
correlation function within the given time window, and p is the value of the Local maximum level
parameter.
Output picking file – allows specifying the name of the file where the calculated static shifts will be
written. This parameter is optional. In the normal execution mode, the shifts are saved to the output trace
headers. This option is provided for debugging of the module parameters.
Headers parameter group
This group of parameters is used to define the names of the headers where the shifts and weights
calculated for each horizon (lines Horizon 1, …) and the resulting weighted-average shift (Average
line) will be saved.
The sum of the calculated shift and horizon time for the corresponding trace is written to the Time/shift
field for each horizon (time window).
The checkboxes to the right of the (Average, Horizon 1, …) lines allow selecting which values will be
written to the dataset trace headers at the module output.
The Time/shift column is used to define the header fields where the time shifts will be saved. The output
header fields for the weights used for shift averaging are defined in the Weight column.
Calculate Statics 3D*
This is a standalone module, i.e. it does not require any input or output modules, and must be the only
module in the workflow.
The module is used to calculate static corrections for relief for PP and PS waves across an area. The
correction values are calculated for each trace and written to the SOU_STAT and REC_STAT headers.
ATTENTION: For the module to run correctly, the project must contain the following header fields:
SOU_STAT, REC_STAT.
The module uses the following headers depending on the selected execution mode:
The module works with traces loaded into the workflow. The sorting order is not important. The source
and receiver positions are determined from the corresponding headers.
If a text file is specified, the source elevations and depths as well as the receiver elevations and/or uphole
time values will be loaded from the file. Otherwise the values will be taken from the corresponding
headers. Uphole time is the wave travel time from the downhole source to the surface. It is assumed that
the downhole source is located below the weathering zone.
The Vp and Vs velocities (in the weathering zone and within the layer from the weathering zone to the
final datum level) are defined in the table.
As a result of processing, the module outputs static corrections for source and receiver positions for PP
waves to the SOU_STAT and REC_STAT header fields.
Parameters
Mode – select the correction calculation method:
• Surface source without weathering – the sources are on the surface, uphole is not used, and
only effective velocity Replacement Vr is used for the calculations.
• Surface source with weathering – the sources are on the surface, uphole is not used, and
weathering depths Wd (similar to the borehole depth) and velocities Replacement Vr and Vw
(weathering velocity) are defined for the profile points.
• Shot holes using uphole times – the corrections are calculated using the uphole time values and
effective velocity Replacement Vr.
• Shot holes ignoring uphole times – the uphole times are ignored, and effective velocity
Replacement Vr and weathering velocity Vw are used for the calculations. The wave travel time
in the weathering zone is determined from the Vw values.
• Receiver station – velocities are defined for the specified RPs. R_LINE is the receiver line
number, and RLock is the receiver stake number.
• Source station – velocities are defined for the specified SPs. S_LINE is the source line
number, and SLock is the SP stake number.
• Coordinates XY – velocities are defined for the points with the specified X and Y coordinates.
Picking (this is an old module; we recommend using the Statics Correction module instead)
Statics Correction
Purpose
The module is used to calculate static shifts of traces relative to “reference” (pilot) traces on the basis
of the cross-correlation function. Static shifts acquired in this way are used in further processing as
correlation statics (trim statics) or as raw information for the calculation of automatic statics (auto
statics). In some cases this procedure can be used to adjust event correlation (picking).
Application
The following data are input into the module:
• data from the flow, containing the raw data (for which the shifts are calculated) and the pilot
traces. The input dataset should contain one pilot trace for each ensemble (gather);
• a set of horizons (picks), specifying the time intervals where the cross-correlation functions
and shifts will be calculated;
• stacking velocities, used for the calculation of the reflected-wave traveltime curves (the static
shifts are calculated taking into account the NMO shift of the traces relative to the pilot).
The calculation of the static shifts is carried out by the module in the following succession:
1. Time gate boundaries are determined for each input data ensemble on the basis of the
specified horizons (picks) and time window lengths. Steps 2 and 3 are then carried out
independently for each time window.
2. Optionally (if the corresponding mode is turned on in the module parameters), the pilot traces
are shifted to make the position of the local pilot maximum closest to the horizon coincide
with the horizon time value.
3. The cross-correlation function of the pilot trace and each trace in the ensemble is calculated
for each ensemble (gather). The position of the closest local maximum of the cross-
correlation function determines the static shift (for the given time window).
4. The resulting weighted-average shift of the trace relative to the pilot is calculated for each raw
data trace. Averaging is done between the shifts acquired for the given trace independently in
each time window. The weight of a specific shift depends on the value of the local extremum
of the cross-correlation function of the pilot and the trace and equals the ratio of the local
maximum to the absolute maximum (p1). If the reference shift mode is selected, the weight
also depends on the value of the local reference maximum whose position corresponds to the
horizon position after the shift. The “reference weight” (p2) in this case equals the ratio of the
local and the absolute maximums, while the final weight with which the shift is averaged
equals the geometric mean of p1 and p2 (SQRT(p1*p2)).
Depending on the mode of operation, either a set of pilot traces or a set of raw traces (for which the
shifts were calculated) is output from the module. The raw trace output mode is the main operational
mode. The indicated headers of the raw trace set contain the resultant static shift, as well as the shifts
acquired separately in the different time gates (more precisely, the sum of the shift and the reference
horizon is output, for a more convenient visual control of the obtained correlation shifts in the form of
picks or headers displayed on the screen (in the time domain) in the Screen Display module), and the
weights that were used for the averaging. The pilot trace output mode is very convenient for a validity
check of the module input data and of the reference shift (if the reference shift is applied).
Very often (especially when the reference shift mode is not used) a detailed picking of the reference
horizons is not needed in order to obtain the correlation shifts; a rather rough (smoothed) horizon
picking on the section is sufficient.
The time marks of the reference horizon have no direct impact on the static shift value calculated as a
result of the correlation with the reference.
The data input into the module in the flow should comply with the following conditions for correct
module performance:
• The input data should contain the pilot traces and the traces for which the time shifts will be
calculated.
• Pilot traces should have the TRC_TYPE header field equal to 2. Raw traces (subject to shift)
should have TRC_TYPE equal to 1.
• Each ensemble of the input data should consist of one pilot trace and one or more traces for
which the shifts will be calculated. The pilot trace should be the first one in the ensemble.
Example 1.
We need to calculate static shifts for the traces relative to a pilot generated for CDP gathers. Suppose
that a dataset containing the pilot is prepared, in which one trace corresponds to each CDP and the
TRC_TYPE header field is equal to 2 for all traces. The raw data (being shifted) are located in another
dataset containing one gather for each CDP, with the TRC_TYPE header field equal to 1 for every
trace. Thus, we set the following parameters in the Trace Input module:
Example 2.
We need to calculate static shifts for each trace of the profile relative to a single pilot trace in the case
of single-channel profiling. The pilot trace is located in a separate dataset; TRC_TYPE is equal to 2
for the pilot trace and to 1 for every trace of the profile. The XLINE_NO trace header field contains
one and the same profile number (for example, 1) in both datasets.
• in Sort fields – XLINE_NO, TRC_TYPE, CDP (strictly in the indicated sequence). Instead of
CDP there can often be another header field, depending on which field contains the number of
the profile point. In the given example the presence of the third sort field is not mandatory if
we do not need to guarantee the trace output order after calculating the shifts.
• in the Selection field – the line "*:2, 1:*"
With this selection of parameters, the entire dataset read (joined from the two datasets within the
database) will represent one ensemble (the first sort key is the same for all traces); the first trace will
be the pilot trace. Static shifts will be calculated for all traces of the profile.
Example 3
We need to calculate static shifts of each trace of the profile relative to its own pilot trace (acquired,
for example, by means of sliding averaging of the raw profile along the reference horizon, or using the
Pilot module) in the case of single-channel profiling. Let us assume that a dataset containing the pilot
is prepared, which contains one trace per profile point; the number of the profile point is given by the
CDP field, and the TRC_TYPE header field equals 2 for every trace. The raw data (being shifted) are
located in another dataset; the number of the profile point is given by the CDP field, and the
TRC_TYPE field is equal to 1 for all traces.
The input data sorting should correspond to the match fields of the picks that determine the time
windows of the pilot. For example, if the match key of the picks is CDP, the first sort key of the input
data should be CDP.
Parameters
Horizons Tab
Horizons
The list represents the horizons that determine the set of time windows. The number of time windows
used is equal to the number of horizons in the list. Each time window for a given pilot trace is
determined as the time interval from t0-t1 to t0+t2, where t0 is the time mark of the corresponding
pick on the given trace, t1 is the value of the Window size above horizon parameter (see below), and
t2 is the value of the Window size below horizon parameter (see below).
If the picking point is absent for the given trace, t0 is acquired by linear interpolation between the
neighboring picking points.
Using the Add… button to the right of the list you can add a horizon from the system database. The
Delete button to the right of the list removes the selected horizon from the list.
NOTE: the match fields should coincide for all picks added to the list. The input data sorting should
correspond to the picks’ match fields. For instance, if the match field of the picks is CDP, the first sort
key of the input data should be CDP.
• Window size above horizon (ms), Window size below horizon (ms)
These parameters correspond to the horizon currently selected in the Horizons list and specify
the boundaries of the time window (see the description of Horizons).
• Velocity. The Browse… button to the right of the parameter allows selecting a velocity model
from the database. Specifying stacking velocities allows considering the NMO shift for correct
calculation of the static shift. The stacking velocities are used to calculate the times on the raw
traces (for which the shifts are calculated) relative to which the shift is calculated. Tm(t0,
Velocity) is subtracted from the time value estimated from the maximum of the cross-correlation
function in order to calculate the final shift for the specified time window, where Tm is the
hyperbolic traveltime curve of the reflected wave, t0 is the given time mark on the pilot trace,
and velocity is the stacking velocity for the CDP point to which the trace is related, at the time
t0.
This parameter is not mandatory. If it is absent, Tm(t0, Velocity) is set to zero.
There is no need to assign a velocity model if the data input into the module has NMO
corrections applied, or is a stack (or single-channel data with zero offset). In the first case
(applied NMO corrections), setting stacking velocities will lead to incorrect stacking of the pilot
data.
Data Output
Specifies the filename to which the calculated static shifts will be output. This parameter is not
mandatory; in the operating mode the shifts are saved to the output trace headers. This possibility is
provided for debugging the module parameters.
Headers Tab
This group of parameters allows specifying the header names to which the calculated shifts and
weights for each horizon will be saved (lines Horizon 1, …), as well as the resultant weight-averaged
shift (line Average).
In addition, for each horizon (time window), the sum of the calculated shift and the horizon time mark
is written to the Time/shift field for the given trace.
A flag to the right of each line (Average, Horizon 1, …) allows selecting the values output to the trace
headers of the dataset at the module output.
The Time/shift column determines the header fields where the time shifts are output; the header fields
for the output of the weights used during the shift averaging are set in the Weight column.
Refraction Statics*
The module is designed to calculate static corrections using information about the first breaks. The
module is of the stand-alone type and therefore should be used in a separate flow.
1. First breaks picking is carried out using the First Breaks Picking module.
2. Refractors are selected on CDP super gathers by picking the first breaks on the gathers
collected using the SuperGather module.
3. Static corrections are calculated by means of the Refraction Statics module.
The module requires the following input:
• the first breaks picking of the input dataset, recorded in one of its headers;
• the picking of the refractors’ offsets and velocities;
• the near-surface section (NSS) 1st layer velocity;
• the NSS replacement velocity;
• the datum.
The following trace headers must be filled in the input dataset:
ILINE_NO – inline number; required only if smoothing of the 1st refractor is selected.
DEPTH (or another header selected by the user) – source depth from the daylight surface (ignored if
the Surface source option is selected).
UPHOLE (or another header selected by the user) – the arrival time of the wave from the source in the
well to the receiver located at the wellhead (ignored if the Surface source option is selected).
As already mentioned, several steps are needed to calculate the static corrections using the refracted
waves:
First breaks picking is carried out using the First Breaks Picking module. Before this, we recommend
performing an amplitude correction procedure so that the picking is carried out correctly. All defects
of the automatic picking are removed in manual mode using the Screen Display module. The picking
must be saved to one of the headers of the input dataset, for example, FBPICK.
An example of a flow for picking the first breaks. Before picking the first breaks, perform an amplitude
correction to determine the first breaks more correctly in the automatic mode.
For the algorithm to calculate the static corrections from refracted-wave data, a layered NSS model
must be supplied at the input, to be used as the first approximation for the calculations. For this
purpose it is necessary to study different CDP super gathers, define how many refractors can be
selected on the basis of the first breaks in the data, and then pick them, thus setting the velocity model
at several SCDP points across the area. Afterwards, this model will be interpolated over the whole
area in the Refraction Statics module.
WARNING! The Refraction Statics module supports a maximum of three refractors. Please
note that in the majority of cases this quantity is enough to describe the starting NSS model with
acceptable accuracy.
After the first breaks picking is done at the previous stage, it is necessary to form CDP super gathers
using the Super Gather module (SCDP-OFFSET sorting) and then pick all the refractors on them.
WARNING! The SuperGather module assigns the SCDP number in the following way:
scdp = xline_no*10000 + iline_no (according to the headers of the central CDP of a super
gather). The SCDP number of a super gather must be the same as the number of its central
CDP.
This means that the CDP numbers must be calculated according to the same formula:
cdp = xline_no*10000 + iline_no. In the case of 3D binning by means of RadExPro, the CDP
numbering is calculated exactly this way.
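A trivial Python check of this numbering convention:

    def cdp_number(xline_no, iline_no):
        return xline_no * 10000 + iline_no

    print(cdp_number(12, 345))   # 120345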
Picking of refractors is the selection of areas with certain velocities and offsets on the SCDP gathers.
Each refractor must be set by two points: the beginning and the end of a segment (the figure below
shows an example of the picking of two refractors). Picks must be made with the Variable spacing
option enabled. Pay attention not to pick the direct wave.
For instance, if you suggest that the number of refractors in the NSS is three, then there should be
exactly 6 picked points on every super gather. If any of the refractors is missing on some of the super
gathers, there still must be 2 points related to it. Place them as close to each other as possible.
In the module parameters, the following should be indicated:
• the input dataset;
• the picking of the refractors created and saved into the database at stage 2;
• the header containing the first breaks obtained at stage 1;
• the V0 velocity;
• the replacement velocity;
• the datum;
• the number of algorithm iterations.
After the calculation of static corrections, they can be applied to the traces using the Apply Statics
module.
Below you can find an example of an initial stack (without statics) and the same stack with the static
corrections calculated by the Refraction Statics module applied:
• Dataset – the path to the input dataset in which all headers necessary for the work of the module
are filled in (see the table at the beginning of the module description).
• Refraction offsets/velocities – indicate the pick in the database (matched with the
SCDP:OFFSET headers) that describes the refractors for the super-CDP points. Each refractor
at a super-CDP point is set by two picking points. The total number of refractors cannot exceed
three.
• First breaks – parameters describing the first breaks:
▪ Pick field – the header containing the picking value of the first break for the trace.
▪ Max difference from refraction – the maximum allowed difference between the first
break time (Pick field) and the refractor time for the trace (Refraction
offsets/velocities). If a trace does not meet this condition, it is excluded from the static
correction calculation.
• Weathering velocity (V0) – determines the velocity in the first NSS layer. The velocity may
be obtained using several methods:
▪ Compute – calculate the velocity in the first NSS layer. For this purpose, specify the
headers containing the source depth (Source depth) and the arrival time of the seismic
event at the receiver located near the wellhead (Uphole time).
▪ Specify – manually set the velocity in a table for several CDP points. Here you also
have to specify the headers containing the source depth (Source depth) or the arrival
time of the seismic event at the receiver located near the wellhead (Uphole time). You
can set the known velocities in the table by pressing the Edit velocity table… button.
▪ Surface source – if the source was at the surface and not in a well, it is sufficient to
indicate the values for the CDPs in the table.
Note! The V0 values are interpolated between the CDP points with set velocities, for each
trace.
Smooth 1st refractor – a group of parameters for optional smoothing of the 1st refractor. Smoothing is
performed using an α-trimmed mean in a rectangular window set by inlines and crosslines (a sketch of
the α-trimmed mean is given after this list).
• Elevation – smoothing of the absolute value indicating the elevation of the NSS first layer base
above sea level. It is recommended in the case of considerable relief changes.
• Depth – smoothing of the value indicating the depth of the NSS first layer base from the surface.
• Window size – the size of the smoothing window for the NSS first layer base.
• Rejection percent – the mean value is computed over the set range excluding the set percentage
of the smallest and the largest values.
• Source statics – the header in which the static correction value for the source will be recorded.
• Receiver statics – the header in which the static correction value for the receiver will be
recorded.
• V0 – the header in which the corrected velocity values of the first layer will be recorded.
The depths and velocities of the refractors may also be saved for additional control where necessary.
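A minimal, illustrative Python sketch of an α-trimmed mean (not the module's code): within a
window, a set percentage of the smallest and largest values is rejected before averaging, as controlled
by Rejection percent.

    import numpy as np

    def alpha_trimmed_mean(values, rejection_percent):
        v = np.sort(np.asarray(values, dtype=float).ravel())
        k = int(len(v) * rejection_percent / 100.0 / 2.0)   # reject k values at each end
        trimmed = v[k: len(v) - k] if k > 0 else v
        return float(trimmed.mean())

    print(alpha_trimmed_mean([1, 2, 2, 3, 100], 40))   # the outlier 100 is rejected -> 2.33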
The module is designed for calculation of static corrections on the basis of the first breaks of refracted
waves.
UPHOLE (or another header chosen by the user) – the time of wave arrival from the source in a well
to the receiver located at the collar of the well (ignored when setting V0 if the Surface source option is
selected).
The process of calculation of static corrections is subdivided into stages; graphic interface of the
module will successively lead you through them.
Stage 0. LMO trend picking is a preparatory stage, at which a common linear kinematic trend (LMO trend) is picked. Further work is carried out with a small fragment of the initial data around the first breaks, cut out along the trend.
Stage 1. First breaks picking is essentially obtaining the first-break picking. Work is done on a small fragment of the initial data with linear moveout applied. Before the search for the first breaks, you can perform simple processing of the fragment in order to level the amplitudes and lower the noise level. Several algorithms for the automatic first-break search are implemented, and the obtained picking can also be corrected manually. Here you can also check the quality of the obtained picking on time-offset crossplots and on maps of the “first-break times – LMO trend” attribute.
Stage 2. Branch assignment is the refractor analysis stage. Picking of the refracted waves is performed interactively on the basis of the cloud of first-break points on a time-offset crossplot. In the process of refractor picking, it is possible to view automatically updated maps of velocities, minimum/maximum offsets for each refractor, and other attributes.
Stage 3. Model – upon transition to this stage, the velocity-depth model of the upper part of the section is calculated for all CDP points. Here you can see depth and velocity maps of each refraction boundary and assess the quality of the calculated model of the upper part of the section. It is also possible to smooth the surface of the first refractor.
Stage 4. Statics – upon transition to this stage, static corrections are calculated for each SP and RP. The calculation is based on the assumption of vertical ray propagation in the upper part of the section, the velocity-depth model of which was obtained at Stage 3. Here you can see the obtained static corrections on maps and compare them with topography and other attributes. You can also view the initial CSP, CRP and CDP seismograms and apply the obtained corrections to them as a preview, to assess the quality of the result.
Dialogue box where it is necessary to indicate the dataset for which the static corrections will be calculated, as well as the database object Scheme, which will store all information on the completed stages.
A dialogue box with parameters will open upon adding the module to the flow. Indicate the Dataset for which the static corrections will be calculated, as well as a scheme (Scheme). A scheme is a database object where the module stores internal parameters and service information.
When you start calculating static corrections, you need to create a scheme in the database. After that, the results of each stage (LMO trend, first-break picking, results of the refractor analysis, as well as all stage parameters) will be saved in this scheme. This allows you to return to the static corrections calculation module at any moment and continue from the very place where you stopped, without repeating the completed stages.
After indicating the Dataset and Scheme, press OK and launch the flow. The main window of the interactive module will open.
The main window of the module: tabs of stages, Next/Back buttons and toolbar of the current stage.
To go to the next stage, it is necessary to complete the work at the previous one. It is always possible to return to previous stages later.
WARNING! If the results or parameters of any previous stage are changed, the results of all subsequent stages will be deleted and will have to be obtained all over again.
Stages parameters
All stages (apart from the first-break picking stage) have their own specific parameters. Before the first transition to the next stage, a dialogue box will appear where you can set these parameters. Later, during work, the parameters of the current stage can always be changed by pressing the corresponding button on the toolbar of the stage tab.
WARNING! After changing the parameters of a stage, the current results of this stage and of all subsequent stages will be deleted.
Stage No. 0. LMO trend picking – Linear kinematic trend picking
Upon the first launch of Stage No. 0, in the LMO trend picking dialogue box it is necessary to set the velocity in the upper layer of the upper part of the section – Weathering velocity (V0) (the LVL velocity) – and the super-CDP ensemble forming parameters (SCDP ensembles), on the basis of which the linear kinematic trend (LMO trend) will be picked.
Dialogue box for setting the parameters of Stage No. 0. The user needs to set the LVL velocity (Weathering velocity (V0)), as well as the step between SCDP ensembles and the base for forming them, using one of the available methods.
The parameters of the Weathering velocity (V0) group are responsible for the way the velocities in the
low-velocity layer (LVL) will be determined. The LVL velocity can be calculated in several ways:
▪ Compute – a mode in which the LVL velocity is calculated according to uphole times. To
calculate, select the headers in which the source depth and uphole time are recorded.
▪ Specify – the LVL velocity (V0) is set manually in a table for several CDP points. The table is called up by pressing the Edit velocity table... button. Then the V0 values are interpolated over the area. The source depth can be set explicitly, through the header with the source depth values (Source depth), or recalculated from the uphole times (Uphole time), also specified in the corresponding header.
▪ Surface source – if the source was on the surface and not in a well, the LVL velocity (V0) is set manually in the table for several CDP points. The table is called up by pressing the Edit velocity table... button. Then the V0 values are interpolated over the area.
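For example (the numbers are hypothetical): with a source depth of 10 m and an uphole time of 5 ms, the Compute mode gives the LVL velocity V0 = 10 m / 0.005 s = 2000 m/s.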
The LMO trend is picked on superseismograms (SCDP) combining several neighbouring CDPs. The Super Gather parameters determine where and how the superseismograms will be formed.
After setting the parameters, press OK. A CDP map appears in the work window with the SCDP points marked on it. To select an ensemble for picking the LMO trend, left-click on the desired SCDP point; a window with the corresponding superseismogram will open.
The SCDP map shows the points for which SCDP ensembles were formed (black circles in the example below), as well as the CDP points (green points in the example below).
SCDP map. SCDP points are shown in black, CDP points in green. The appearance of the window is determined by its parameters.
Map visualization parameters. Here you can specify the size of the points, their colour and transparency, colour the CDP points according to a selected attribute (for example, the CDP fold), change the properties of the axes, etc. A detailed description of the map parameters can be found in the description of the Interactive QC module.
Set the same zoom on both axes of the map (Set axis ratio 1:1)
If necessary, you can change the base of a separate existing SCDP point. To do this, right-click on it and
select Change SCDP command from the context menu:
Changing the base of the SCDP point selected on the map
A dialog will open in which you can change the base for the formation of the selected superseismogram:
Moreover, you can create additional SCDP points outside the original grid. To do this, right-click on the
CDP point around which you want to form a new superseismogram and select Gather SCDP around
this CDP in the context menu:
A dialog box appears in which you need to specify the base for the formation of a new SCDP ensemble:
Dialog box for the formation of the base of the selected SCDP
After pressing the OK button, a new SCDP point will appear on the map, and the LMO trend can be set for it.
Blue point – the base is set individually, the LMO trend is not set
Pink point – the base is set individually, the LMO trend is set
The SCDP superseismogram is displayed sorted by source-receiver offset. The position of each trace on the screen is proportional to its offset (the value of its OFFSET header); therefore, the observed breaks of the direct and refracted waves appear on the seismogram as straight lines.
The picking of the LMO trend (linear kinematics trend or first breaks trend) is made in this window.
The green dashed line is the moveout curve of a direct wave; its slope corresponds to the apparent
velocity V0 at a given point. Thus, it is the trend of first breaks for the nearest offsets.
SCDP ensemble seismogram. The green dashed line corresponds to the velocity of the direct wave (V0, LVL
velocity). LMO trend is picked in this window.
Seismogram visualization parameters. Here you can specify the display mode, color palette, axis
parameters, etc. A detailed description of the seismogram visualization window parameters can be found
in the description of the Interactive QC module.
Turn on/off the mode that allows you to manually change the velocity V0 for this SCDP.
Click the button on the toolbar to enable the picking mode of the LMO trend (the first-break trend). The trend is set by several points; between them, the values of the kinematic corrections are linearly interpolated, and at the edges they are linearly extrapolated – to the left, to the intersection with the V0 line; to the right, to the trace with the maximum offset value.
The LMO trend editing commands are similar to the basic picker commands in the Screen Display
module:
• To move a point, capture it with the right mouse button and drag it to a new location.
Remember that picking the linear kinematic trend is a preparatory stage before picking the first breaks. The trace-trimming window and the automatic first-break search window are set relative to the LMO trend. Press the corresponding button to see the current position of these windows on the screen.
Window for picking the first breaks trend (LMO trend). The area where Stage No.1 will automatically search
for the first breaks is highlighted in orange. Purple dashed lines are the boundaries of the seismogram area that
will be transmitted to Stage No.1.
The boundaries of the trimming window are displayed as purple dashed lines. The window is located symmetrically relative to the trend; its width is determined by the Trace trimming width, ms parameter.
The first-break search window is highlighted in orange. Its width is set separately below the trend (Auto-picker width below LMO trend, ms) and above it (Auto-picker width above LMO trend, ms).
The parameters that define the boundaries of the windows are set on the toolbar of the stage tab.
After you have finished working with the current superseismogram, you can go to the next one either by
selecting it on the map or by using the arrows on the toolbar of the window with the seismogram.
The SCDP points where the trend of the first breaks was set are displayed in red on the map.
SCDP map. Red points indicate SCDP ensembles for which the first breaks trend was picked.
After you have set the first-break trend, press the Next button to go to the next stage.
Stage No. 1. First breaks picking
At this stage, the map windows (SP, RP and CDP maps) will open. In addition to the stage parameters key, there are the following keys on the toolbar of the stage tab:
- visualization of the first-break picking on the QC crossplot (see the section Picking Quality Control)
Auto-picker width below/above LMO trend, ms – the width of the first-break search window below and above the LMO trend.
QC Area size, m – the side size of the square control area (see the section Picking Quality Control).
Map Windows
The toolbar in the map windows is the same as at the previous stage.
Pressing the corresponding key, you may colour the points of any map according to a selected attribute. Attribute values are averaged over all the traces of the ensemble (CSP, CRP or CDP, depending on the map type).
The following attributes are available before the start of work at Stage 1: <LMO trend> (values of the linear kinematic trend averaged over the ensemble), <V0> (LVL velocity) and the ensemble fold (depending on the map type: <CDP FOLD> is the CDP fold, <RECEIVER_FOLD> is the CRP fold, <SOURCE_FOLD> is the CSP fold; here, fold means the number of traces in the corresponding seismogram).
CRP map coloured depending on the value of attribute "CRP Fold", i.e. the number of traces in CRP
seismogram.
You may open additional maps using the keys on the toolbar of the stage tab, if necessary.
Seismogram Window
Left-clicking on a point of one of the maps (SP, RP, CDP) will open the corresponding CSP, CRP or CDP seismogram. A seismogram includes a fragment of data around the first-break trend, with linear moveout applied.
The points included in the selected seismogram will be highlighted on all maps (for example, if a CSP seismogram is opened, the active receiver spread will be highlighted on the RP map, together with the corresponding CDP points on the CDP map).
Moving the cursor over the seismogram window will also highlight the SP, RP and CDP points belonging to the trace under the cursor.
In addition to the standard keys for settings, zoom, amplification, etc., the toolbar of the seismogram window contains the following key:
Setting of the seismogram preprocessing parameters before first-break picking. Pressing this key opens a dialogue box allowing you to set a simple seismogram processing sequence, adjust the parameters of the procedures and apply the sequence (see the section Seismogram Preprocessing)
Seismogram Preprocessing
Press the corresponding key on the toolbar of the seismogram window to process seismograms in the Interactive Refraction Statics module. The Processing Parameters window, divided into two sections, will open. The left section shows the list of available procedures (Available); the right section shows the list of selected procedures (Selected).
The parameters of the procedures are detailed in the descriptions of the corresponding modules. If you want to add a module from the Available list to the flow, double-click on it, or select it in the list and press the add key. The dialogue box with the module parameters will appear. After setting the parameters and pressing OK, the module will be added to the flow.
The Selected list is a standard RadExPro flow editor and supports all its functions: double-click settings, annotation of modules, dragging, copying, deleting.
After the flow is set up, it may be applied to the currently displayed ensemble or to all ensembles. It is recommended to tune the flow on one ensemble first and then apply it to all ensembles. Later, you may re-process particular ensembles separately, if necessary.
To process the current ensemble, press the Filter key in the Current ensemble field. If you are satisfied with the result, you may process all the dataset traces by pressing the Filter key in the All ensembles field.
To set the autopicking parameters, press the corresponding key on the toolbar of the seismogram window. The auto-picker parameters are in general the same as in the First Breaks Picking module, and you may find their detailed description there. The only difference is that the first-break search window in the Interactive Refraction Statics module is determined by the parameters Auto-picker width below LMO trend, ms and Auto-picker width above LMO trend, ms on the toolbar of the stage tab at the top of the screen.
Autopicking of the first breaks may be performed for the currently displayed ensemble or for all ensembles of the dataset. It is recommended to tune the picker parameters on one ensemble first and then apply them to all ensembles. Later, you may repeat the picking for separate ensembles with individual parameters, if necessary.
To pick the current ensemble, press the Pick key in the Current ensemble field. If you are satisfied with the result, you may run autopicking on all the dataset traces by pressing the Pick key in the All ensembles field. To delete the current picking, press the Remove pick key.
CDP seismogram with picked first breaks.
The resulting picking may be corrected manually. To do this, press the corresponding key on the toolbar. A window with a list of pickings will appear, containing the single picking First breaks.
The picking editing procedure is then the same as in the Screen Display and Seismic Display modules. The results of manual editing are saved automatically.
Picking quality control
After the first-break picking is performed, the following additional attributes become available on the maps: <First breaks>, i.e. the values of the first-break picking, and <First breaks – LMO trend>, showing how far the picked first breaks deviate from the initial linear kinematic trend.
Remember that attribute values are averaged over ensembles according to the map type; therefore, the results may be analyzed in different sortings and compared, for example, with topographic data.
From left to right: SP and RP maps coloured according to the attribute <First breaks – LMO trend> and RP map
coloured according to RP elevations (values REC_ELEV)
In addition to the maps, you may also analyze the picking values over the area on the time-offset QC crossplot. For this purpose, press the corresponding key on the toolbar of the stage tab in the top left corner of the screen and left-click on any of the maps. The OFFSET vs. First breaks crossplot will appear in the active window, displaying the first-break times versus the SP–RP offset for all points of the map that fall inside the square control area (its boundaries are shown as a blue square, or a rectangle, depending on the map aspect ratio).
Receiver map with a rectangular control area. On the right, there is a crossplot corresponding to it.
A crossplot may be drawn from maps of any type. The side size of the control area, whose traces are included in the crossplot, is set by the parameter QC Area size, m on the toolbar of the stage tab.
Crossplot points may also be coloured by one of the attributes, for example, by their deviation from the LMO trend:
By dragging the control area along the map and analyzing the picking through different types of ensembles, you may quickly and effectively assess its quality and, if necessary, change the parameters of the auto-picker and/or the preprocessing.
The crossplot is synchronized with the maps: when the cursor is moved over points of the crossplot, the corresponding SP, RP and CDP points are highlighted on the maps.
Besides, if you click a point on the crossplot, a seismogram containing the trace to which the point belongs will open. The type of seismogram opened by default depends on the type of map from which the crossplot was called: for example, if it was called from the RP map, a CRP seismogram will open; if from the SP map, a CSP seismogram. If you need a seismogram of another type, right-click on a point and select the seismogram type from the context menu.
For each crossplot point it is thus possible to open a seismogram with the trace to which the point belongs. In this case, all points belonging to the traces of the opened seismogram will be highlighted:
This allows you to understand whether outliers on the crossplot are connected with a particular SP or RP.
Stage No. 2. Branch assignment – Refractor Analysis
The program allows picking at most three refractors. The number of refractors must be constant over the area.
The cloud of first-break points is analyzed on CDP superseismograms (SCDP ensembles). Before the start of this stage, a dialogue box with its parameters will appear. Indicate the step between SCDP ensembles along inlines and crosslines (Iline step, Xline step) and the base, i.e. the number of inlines and crosslines included in each ensemble (Iline base, Xline base).
Dialogue box for the parameters of Stage No. 2. Indicate the step between SCDP ensembles and the base on which the ensembles will be formed.
After pressing OK, the stage tab will open. In addition to the stage parameters key, there are the following keys on the toolbar of the stage tab:
By default, a CDP map with highlighted positions of the SCDP ensembles on the uniform grid will appear in the stage tab. (As at Stage 0, you may edit the parameters of a particular SCDP point here, or set an additional point centred at any CDP point; see Editing parameters of particular SCDP points.) Besides, one more CDP map will open, which you may colour according to the value of any attribute, in the same way as at the previous stage.
At this stage, the velocities of each of the 3 possible refractors (1st branch velocity [m/s], 2nd branch velocity [m/s], …), their minimum and maximum offsets (1st branch start offset [m]/1st branch end offset [m], …) and the times corresponding to their minimum offsets (1st branch start time [ms], …) appear in the list of available attributes. They may be used to control the consistency of the refracted-wave picking over the area.
Using the corresponding key, you may create as many CDP maps as there are attributes you need to monitor during the work. All maps are automatically updated upon moving between SCDP points. If you want to update them before moving to the next point, press the update key.
Refractor Picking
Left-click on an SCDP point on the map. A time vs. offset crossplot with the cloud of first-break points will open, where you may pick the refractors, i.e. define the velocities of the refracted waves and their respective offset ranges.
SCDP point and its corresponding crossplot with the cloud of first break points
If required, you may colour the first-break points by the values of any attribute by pressing the crossplot parameters key. For example, in the figure below, the crossplot points are coloured by their deviation from the LMO trend:
Change of the colour of crossplot points depending on an attribute set in the visualization parameter box.
Since the moveout curves of refracted waves are known to be straight lines, we need to pick linear segments on the crossplot. The program allows picking at most 3 refractors, and their number must be constant over the area. It is therefore worth looking through the whole area and choosing the optimal number of refractors before starting work. Thus, in the example above, either 2 or 3 refractors may be picked.
WARNING! The number of refractors, constant over the area, is set by the first SCDP point at which they are picked. For example, if you set 2 refractors at the first point, the program will require 2 refractors at all subsequent points. If in the process of work you realize that the number of refractors needs to be changed, you will have to delete the results of this stage by pressing the corresponding key on the stage toolbar and then pick the refractors again.
The straight blue line above the first-break cloud is a theoretical moveout curve of a refracted wave. You may fit it to the first breaks to estimate the velocities observed in the first breaks of the refracted waves. Left-click on the left boundary of the interval to which you want to fit the theoretical moveout curve, and then right-click on the right boundary of the interval. The velocity corresponding to the current inclination of the moveout curve is displayed in the top left corner of the crossplot.
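For instance (the numbers are hypothetical): if the moveout curve passes through first breaks at 120 ms at an offset of 200 m and at 220 ms at an offset of 500 m, the corresponding apparent velocity is (500 − 200) m / (0.220 − 0.120) s = 3000 m/s.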
Fit the theoretical moveout curve to the point interval that, in your opinion, corresponds to a refracted wave, and try to match the inclinations as closely as possible. Then press the Set key in the List of branches table under the crossplot. The velocity value and the offset range for the first refractor will be saved in the table. If you want to delete the velocity and offset values for the current refractor, press the Clear key.
Picking a refractor on the crossplot. Left-click on the beginning of the interval and right-click on its end to pick a refractor. When the inclination of the blue line matches the inclination of the point cloud, press the Set key to save the velocity and offset values for this refractor. The interval corresponding to the 1st refractor is highlighted in green on the crossplot.
You may pick all the refractors for this SCDP in the same manner:
Picking of two refractors over first breaks crossplot.
WARNING! Please note that only the first-break values that fall within the offset range of one of the picked refractors will be used for the calculation of the velocity model from which the static corrections are obtained. This means that if the cloud of points disintegrates, or a velocity inversion is observed at some offsets, refracted waves should not be picked on such segments; otherwise the quality of the results will deteriorate.
In many cases it is convenient to apply linear moveout to the first breaks to make the picked segment horizontal. To do this, check the Apply LMO box and, using the slider, set the velocity at which the segment of interest becomes horizontal. This means that the velocity with which the LMO correction is applied is equal to the refractor velocity. The interval corresponding to a refractor with a higher velocity will have a negative inclination (to the right), as it will be over-corrected, and the interval corresponding to a slower refractor will have a positive inclination (to the left), as it will be under-corrected.
After that, fit the theoretical moveout curve to the horizontal interval with left and right clicks, and press the Set key in the table in order to save the refractor parameters.
Application of the LMO correction with a velocity of 2143 m/s corresponding to the first refractor. Applying the correction allows a more accurate setting of the refractor velocity.
The velocity range of the linear moveout slider is defined by the Min: and Max: parameters under it.
SCDP points for which refractors were picked will be coloured red.
SCDP map. Red points are SCDPs for which refractors were picked, black points are SCDPs for which
refractors were not picked.
If, when picking refractors, you create maps of any attributes related to them, all maps will be automatically updated after each move between SCDP points.
Stage No. 3. Model – Velocity-Depth Model Calculation
A dialogue box with the stage parameters will open upon transition to this stage:
Maximum difference from first break to branch point above branch line – the maximum deviation of the first-break picking from the refractor line, above the line.
Maximum difference from first break to branch point below branch line – the maximum deviation of the first-break picking from the refractor line, below the line.
When calculating the velocity-depth model, the algorithm will not include first-break values that differ from the picked refractor, upward or downward, by more than the indicated values. This is necessary to exclude random outliers, which may have a significant impact on the solution.
First breaks falling below the dotted line will not be taken into account in the calculation of the velocity-depth model.
After pressing the OK key, the work window of Stage No. 3 will appear with maps of depths and velocities.
Calculated velocity-depth model. Maps of the depths and velocities of each picked refraction boundary are shown in the window after the transition to Stage No. 3.
You may open additional CDP maps for joint analysis of any other attributes by pressing the corresponding key on the toolbar. In addition to the depth and velocity maps, which open automatically, at this stage you may also see maps of the absolute elevation of the refraction boundaries above sea level (1st refractor elevation [m], 2nd refractor elevation [m], …). Besides, all attributes from the previous stages and the trace headers are still available.
When the velocity-depth model is calculated, the depth of the first refraction boundary is computed using the LVL velocity V0 set by the user. Since this velocity is fixed during the calculation, a situation may arise where the depth to the first refraction boundary differs greatly between two nearby points, which in most cases is not geologically plausible. Therefore, in practice it is often recommended to smooth the surface of the first refraction boundary and re-calculate the velocity V0 accordingly, to keep the model consistent.
To smooth the surface of the first refraction boundary in the model, press the corresponding key on the stage toolbar. The dialogue box with the smoothing parameters will appear:
The first method (Elevation) is recommended when the surface relief is strong while the refraction boundary is expected to be subhorizontal. The second method (Depth) is preferable when the refraction boundary is expected to follow the surface relief.
Smoothing is performed in a rectangular window. The size of the window is set by the Window size parameters.
You may also set the rejection percentage (Rejection percent). The algorithm sorts all values captured by the averaging window in increasing order, deletes the set percentage of the largest and smallest values, and averages the rest.
After setting the smoothing parameters, press the Smooth key. The surface of the first refraction boundary will be smoothed, and the V0 values and the depths of the subsequent refraction boundaries will be re-calculated.
Stage No.4. Statics – Static Corrections
This is the stage of calculating static corrections based on the model obtained at Stage No. 3. A dialogue box with the stage parameters will open upon transition to this stage.
Indicate the replacement velocity (Replacement velocity) in m/s and the datum plane (Datum) for which the corrections will be calculated. The datum plane may be specified as a constant (Constant datum) or in another way provided by the stage parameters.
Maps of the static corrections calculated for SP and RP, as well as a map of the averaged common static corrections, will be opened. You may also open maps of other attributes using the keys on the stage toolbar. For example, you may compare the corrections with the relief:
Joint analysis of the calculated static corrections and a map of a selected attribute (in the figure above, a relief map).
You may left-click on a point on any of the maps, and the initial seismogram of the input dataset corresponding to the map type (CSP, CRP or CDP) will open. By default the seismogram is displayed sorted by source-receiver offset; the position of each trace on the screen is proportional to its offset.
In addition to the standard keys, at this stage there is one more key on the toolbar of the seismogram window. Pressing it, you may apply the calculated static corrections to the traces and preview the result.
CSP seismogram before applying the static corrections.
Saving Results
To save the static corrections to headers, press the corresponding key on the stage toolbar.
After that, a window will appear where you may select the headers to which the calculated static corrections will be saved:
Window for recording the static corrections and the parameters of the calculated model in trace headers
Refraction group of parameters – here you may save the parameters of the velocity model for selected refraction boundaries. Check the refraction boundaries whose parameters you would like to save (#1, #2, #3). Then select the respective headers for each saved boundary:
• Elevation – select a header to which the elevations of the refraction boundary surface will be recorded.
• Depth – select a header to which the depths from the surface to the refraction boundary will be recorded.
• Velocity – indicate a header to which the velocity at the refraction boundary will be recorded.
Weathering velocity group. If you need to save the final LVL velocity values (V0), check this box and select the header to which they will be recorded.
Operation principle
In essence, the algorithm calculates and applies time-variant correlation static corrections to ensembles of traces.
Several windows for calculating the static corrections are set in the module parameters. In each window, the cross-correlation of each trace with the pilot trace is calculated and the maximum of this function is determined. The corresponding static shift value is assigned to the centre of the window. Between the centres of the windows, the shift values are linearly interpolated. Outside the specified windows (above the centre of the first window and below the centre of the last window), the shift values, depending on the selected parameters, are either extrapolated as constants or linearly interpolated so that the shifts of the first and last samples of the trace are equal to zero. The shift values obtained in this way for each sample are applied to the samples of the original trace.
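The core of the computation can be sketched in Python as follows (a simplified single-window illustration under assumed conventions, not the module's actual implementation; the sign of the returned shift is a matter of convention):

import numpy as np

def window_static_shift(trace, pilot, i0, i1, max_lag, dt_ms):
    # Cross-correlate the trace with the pilot trace within the sample
    # window [i0, i1) and return the shift (in ms) at the correlation
    # maximum; this value would be assigned to the window centre.
    t = trace[i0:i1].astype(float)
    p = pilot[i0:i1].astype(float)
    n = len(t)
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            cc = np.dot(t[lag:], p[:n - lag])
        else:
            cc = np.dot(t[:n + lag], p[-lag:])
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return -best_lag * dt_ms

# Per-sample shifts between the window centres would then be obtained by
# linear interpolation, e.g. np.interp(np.arange(n_samples), centres, shifts).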
By default, the sum of all traces of the ensemble is used as the pilot trace. You can also use an external pilot trace calculated separately; in this case, it is assumed that the pilot trace is the first trace of each ensemble (i.e. you are expected to deliver the calculated pilot traces to the module input along with the data, providing the necessary sorting).
The module can combine several consecutive ensembles into one super-ensemble and calculate one pilot trace for the super-ensemble. The number of initial ensembles in the super-ensemble is determined by the Smash parameter. In the mode with an external pilot trace, the pilot traces of all ensembles included in the super-ensemble are averaged.
NOTE: when using the module in the Frame Mode, the use of the Honor Ensemble boundaries option is mandatory.
Module parameters
Time window – a group of parameters responsible for setting the time windows:
Add horizon – add a horizon from which the window will be calculated:
• Horizon – a header containing the time relative to which the position of the window is calculated for each trace.
• Window length, ms – the window length in ms.
• Window position – the position of the window relative to the horizon:
▪ Above – the window is counted up from the specified horizon;
▪ Centered – the horizon is the centre of the window;
▪ Below – the window is counted down from the horizon.
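For example (the numbers are hypothetical): with a horizon at 800 ms, Window length = 200 ms and Window position = Centered, the correlation window on each trace spans 700–900 ms; with Above it spans 600–800 ms, and with Below it spans 800–1000 ms.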
Smash – the size of the super-ensemble, i.e. the number of ensembles that take part in the calculation of the pilot trace. The pilot trace is calculated as the average of all the traces of all these ensembles.
Use external pilot trace – if you enable this check box, the first trace of each ensemble will be used as the pilot trace of the ensemble.
Taper shifts to zero – when the check box is enabled, the shifts of the first and last samples of each trace are assumed to be zero, and the shift values above the centre of the first window and below the centre of the last window are linearly interpolated to them. If the check box is disabled, the constant shift values obtained for these windows are used above the centre of the first window and below the centre of the last window.
Output pilot trace – output the pilot trace into the output ensembles. The pilot trace is marked by the header values TRC_TYPE = -1, CHAN = -1.
Pilot offsets limits – restrictions on the source-receiver offsets of the traces used for calculating the pilot trace.
NOTE: if both of these parameters are equal to zero, there are no restrictions, and all the traces of the ensemble will be used when calculating the pilot trace.
Number of threads – the number of parallel calculation threads created when the module is running. With the default value of 0, the number of threads will be equal to the number of logical CPU cores.
Velocity
Time/Depth Conversion
The module converts data from the time to the depth domain and vice versa using a specified velocity model. The velocities can be either RMS or interval, defined in either the time or the depth domain. The conversion can be based on either RMS or average velocities.
The velocity model is defined at specific CDPs and is interpolated between them. Below the deepest point where the velocity is explicitly defined, the RMS/average velocities are calculated based on the assumption that the interval velocity remains unchanged.
Parameters
Conversion direction – specify the direction of the conversion:
• Depth integration – the depth of each sample (i) is computed as the depth of the previous sample plus the depth increment at this sample, calculated as:
D(i) = D(i-1) + (dt/2)*Vinterval(i)
The interval velocities are defined at the bottom of each interval (layer), i.e. at the points where the initial velocity function was picked. They are considered constant within each layer (see the sketch after this list).
• Using average velocities with linear interpolation – The depth of each sample is computed as its
time multiplied by its average velocity. Average velocities are defined as the depth of the
boundary divided by the reflection travel-time. They are calculated from the interval velocities
at the bottom of each interval (layer). Between the layer boundaries, the average velocity values
are linearly interpolated. This method would be equivalent to depth integration if the interval
velocities were initially defined at each sample.
• Using RMS velocities with linear interpolation – The depth of each sample is computed as its
time multiplied by the corresponding RMS velocity. RMS velocities are initially defined at
reflected boundaries only, where the velocity function was picked. Between the boundaries, the
RMS velocity values are linearly interpolated. Using RMS velocities directly for time-to-depth
conversion is not precisely correct, though sometimes used. We keep this method of conversion
mainly for backward compatibility only.
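The depth-integration recursion can be sketched in Python as follows (a minimal illustration; here the interval velocity function is assumed to be already resampled to every time sample):

import numpy as np

def time_to_depth_integration(dt_ms, v_int):
    # Depth of each sample via D(i) = D(i-1) + (dt/2) * Vinterval(i).
    # dt_ms - sample interval in ms (two-way time, hence the dt/2 factor);
    # v_int - interval velocity in m/s at every time sample.
    dt_s = dt_ms / 1000.0
    depth = np.zeros(len(v_int))
    for i in range(1, len(v_int)):
        depth[i] = depth[i - 1] + (dt_s / 2.0) * v_int[i]
    return depth

For instance, with dt = 2 ms and a constant interval velocity of 2000 m/s, each sample adds (0.002 s / 2) * 2000 m/s = 2 m of depth.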
Destination range – specify the maximum depth or time of the resulting traces after the conversion,
either in m or in ms depending on the output domain;
Destination sample interval – sample interval of the resulting traces after the conversion, either in m
or in ms depending on the output domain;
Use coordinate-based interpolation – as mentioned above, velocities are defined at certain reference CDPs and linearly interpolated between them. When this option is off, the interpolation is based on CDP indexes; otherwise, it is made according to the CDP coordinates stored in the CDP_X and CDP_Y header fields of each trace.
Output velocity traces – when the option is on the resulting traces in the output domain will be
substituted by the interpolated values of either RMS velocities or average velocities, whichever are to
be used for the conversion depending on the Use average velocities flag on the next tab.
The formula used for velocity scaling in the case of a two-interval model is shown below (for an N-interval model a recursive approach is used):
v_rms = sqrt( (v1^2*t1 + v2^2*t2) / (t1 + t2) ), where t1 = h1/v1 and t2 = h2/v2 are the travel times through the intervals.
Here v_rms is the RMS velocity, h1 and h2 are the thicknesses of the intervals, and v1, v2 are the interval velocities.
The file is ASCII tabulated, either space- or tab-separated. The first line contains the data type: 2D or 3D. Then there are columns containing CDP numbers, times (or depths), velocities, and (optionally) X and Y coordinates. The velocities can be either RMS, measured at the specified time (depth), or interval, corresponding to the interval from the previous time (depth) to the specified time (depth). The examples are below:
Here the model type is 2D, the columns are: CDP, time (or depth), velocity.
Here the model type is 3D, the columns are: CDP, time (or depth), velocity, X-coordinate, Y-coordinate.
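For illustration, such files might look as follows (the numbers are hypothetical). A 2D file:

2D
10   0    1500
10   500  1700
10   1200 2100
250  0    1520
250  600  1750

A 3D file:

3D
10   0    1500  455300.0  6250100.0
10   500  1700  455300.0  6250100.0
120  0    1520  456100.0  6250400.0
120  650  1780  456100.0  6250400.0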
DB Velocity Interpolation
This module is meant for visualization of the velocity field together with the CDP section superimposed on it. When the module is activated, the following window appears:
Parameters
In the Database-Pics field, click the Browse... button and, in the window that opens, select the required velocity picks from the database. The value displayed at each point of the screen equals the velocity value at this point plus the trace amplitude at this point multiplied by a normalizing factor:
• None – zero normalizing factor; only the velocity field will be displayed in the resulting window,
• Mean – the seismic section normalized by the mean value will be added to the velocity field,
• RMS – the seismic section normalized by the root-mean-square value will be added to the velocity field.
Additional scalar – an additional scalar by which the trace sample values will be multiplied before being displayed on the screen.
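For example (the numbers are hypothetical): at a point where the interpolated velocity is 2000 m/s and the normalized trace amplitude, after scaling, is 50, the displayed value is 2000 + 50 = 2050.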
An example of the flow:
1. Trace Input
2. DB Velocity Interpolation
3. Screen Display
Here, Trace Input reads a stacked section saved in the database, DB Velocity Interpolation is the velocity field interpolation module itself, and Screen Display displays the velocity field with the superimposed section on the screen.
HVA Semblance (Horizon Velocity Analysis Semblance)
This module is meant for velocity field calculation along a specified horizon. The traces input to the module must be sorted by CDP, without normal moveout correction applied.
When the module is activated, a window containing two tabs, Semblance and Horizon, appears.
Parameters
Semblance tab parameters:
The Mean and RMS options select the semblance calculation method. The horizon can be specified in one of the following ways:
• Pick in database – specify a horizon from the database. To do this, activate the option, click the Select... button and, in the opened dialog box, select the required project database object.
• Trace header – specify the horizon from a header in which it has been saved before. To do this, activate this option, click the Browse... button, then select the required header in the opened dialog box.
• Specify – specify the horizon manually in the Specify field.
The order is the following: CDP:time.
Example: 100:2250, where 100 is the CDP number and 2250 is the time.
The Save template and Load template buttons are meant for saving the current module parameters as a template in the project database and for loading parameters from a previously saved template, respectively.
1. Super Gather
2. Apply Statics
3. Bandpass Filtering
4. Amplitude Correction
5. HVA Semblance
6. Trace Output
7. Screen Display
The Super Gather module allows creation of trace sets (super gathers) composed of several CDPs. It increases the signal-to-noise ratio, thus enabling a more accurate velocity field calculation.
This module calculates and applies normal moveout corrections to CDP trace samples by means of linear interpolation, using a velocity function.
When the module is activated, the following window containing two tabs, Velocity and NMO, appears:
Parameters
On the NMO tab, specify the normal moveout correction calculation method:
• NMO – select this option if you need to apply normal moveout corrections to the trace samples.
• NMI – select this option if you need to perform velocity inversion, i.e. to apply the inverse kinematic function to seismograms to which the normal moveout corrections have already been applied.
• Mute percent – the muting parameter, expressed in percent. Trace stretching after NMO application is an unwanted but unavoidable effect. Set this parameter to mute all data stretched by more than the indicated percentage.
Modify velocity – the NMO corrections can be applied with a velocity function that differs from the selected one by a certain percentage.
The velocity function specification parameters are grouped on the Velocity tab:
• Activate the Single velocity function (constant velocity function) option and specify it manually. The order of specification is the following: time:velocity, time-time:velocity, etc. (see the sketch after this list).
• Activate the Get from file option to read the velocity function from a file. To do this, click the Browse... button and select the desired file in the standard dialog box that opens.
• Activate the Database picks option to use a velocity function previously saved in the project database. To do this, click the Browse... button and select the desired database object in the standard dialog box that opens.
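The NMO correction described above can be sketched in Python as follows (a minimal single-trace illustration with linear interpolation and stretch muting, under assumed conventions; not the module's actual code). A velocity string such as 0:1500, 1000:2000, 2500:2800 (hypothetical values) would first be expanded into a velocity value at every zero-offset time sample:

import numpy as np

def nmo_correct(trace, offset_m, t0_s, v_of_t0, mute_percent):
    # For each output sample at zero-offset time t0, take the input sample
    # at t(x) = sqrt(t0^2 + x^2 / v(t0)^2), interpolating linearly between
    # samples; samples stretched by more than mute_percent are zeroed.
    tx = np.sqrt(t0_s ** 2 + (offset_m / v_of_t0) ** 2)
    out = np.interp(tx, t0_s, trace, left=0.0, right=0.0)
    stretch = np.gradient(tx, t0_s) - 1.0      # relative stretch per sample
    out[stretch * 100.0 > mute_percent] = 0.0  # stretch mute
    return out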
The module is designed for estimation/improvement of the velocity profile along given horizons.
Principle of operation
The user picks one or several horizons on a stack. For each horizon, seismic gathers averaged on a given base are formed from the CDP gathers within a given time window along the horizon, on the basis of the starting velocity curve. A velocity spectrum is calculated from those gathers within the indicated range of velocity variation and is visualized graphically. The user can then pick the velocity along the selected horizon on the basis of the velocity spectra, analyzing the changes in the appearance of the stack calculated with the current velocity function, as well as the changes in the appearance of the seismic gathers NMO-corrected with the current velocity function. The acquired velocity function can be saved to the database. Other horizons are processed in a similar way.
Input data
The input data for the module are CDP gathers, a CDP stack, a starting velocity function (represented in the form of vertical velocity picks or a uniform function) and a set of parameters. All data are input from the database; nothing can be input from the flow or passed to the flow.
Output data
The horizontal velocity functions along horizons, generated or adjusted as a result of the module operation, can be saved to the database during the module operation using the corresponding menu items. A warning message appears if there are any unsaved horizontal velocity functions when the program finishes.
Parameters
The parameters dialogue of the module includes several tabs. The figure represents the tab of the main execution parameters.
Dialogue of module parameters
Parameters Tab:
• Window size above horizon / Window size below horizon. The time window within which the super gathers are generated and the stack is calculated is specified by two parameters – the part of the window above the horizon (Window size above horizon) and the part of the window below the horizon (Window size below horizon).
• Super gathering base. The number of gathers to be averaged.
• Stack dataset (the CDP stack for the same profile). Specified by a database object. The dataset should contain the same number of CDPs as the dataset with the seismic gathers. If there is a mismatch, the user receives a warning and the module aborts.
• Gather dataset (CDP gathers for one profile). Specified by a database object that contains seismic data (a dataset) – all CDP seismic gathers belonging to the profile. The CDP header value is considered to be the point number. The data must be sorted by CDP before use. If there are gaps in the CDP numbers due to missing data, the program will fill the gaps with empty traces.
Velocity (Starting velocity function). The starting velocity function should be specified using one of the following modes:
• Start V / End V – the start and end velocities for the search (in km/s).
• Step V – the velocity step.
• Time window – the time window whose samples are used in the velocity spectrum generation.
The Draw parameters tab includes groups of parameters related to the visualization of the data, the velocity function and the semblance.
Using this tab you can specify the visualization parameters of the different data components used in the velocity analysis:
• Stack (Stack)
• Initial velocity function (Model)
• Initial stack strip along the horizon selected by the user (Initial stack strip)
• Stack strip along the horizon selected by the user, acquired during the module operation with the current velocity function (Modif. stack strip)
• Velocity spectrum (Semblance)
• Super gathers (Super gathers)
• Horizontal axes, one for all elements (Horizontal axis).
Data:
You can specify the Display mode and Norm mode, the screen gain (Additional scalar) and the Bias; the meaning of these parameters is the same as in Screen Display.
• WT/VA displays traces using the wiggle trace mode/variable area mode
• WT displays traces using the wiggle trace mode
• VA displays traces using the variable area mode
• Gray displays traces using the variable density in grey palette mode
• R/B displays traces using the variable density in red-white-blue palette mode
• Custom displays traces using the variable density in palette, specified by the user
• Define button is available if the option Custom is on. It opens the dialogue window Custom
Palette.
Additional scalar (screen gain) – an additional scalar for multiplying the sample values before display.
Bias – a bias of the average trace level from zero, which changes the black level in the variable area display mode. A positive value shifts the trace zero line to the left and increases the blackened area under the curve; a negative value decreases the blackened area under the curve.
Vertical axis
Vertical axis display parameters can be specified in the Vertical axis group for each element of the velocity analysis window individually. (The horizontal axis parameters are set simultaneously for all elements – select Horizontal axis in the drop-down list at the top of the dialogue page.)
Misc
The parameters dialogue offers selection of the Supergather display step during velocity analysis and indicates whether disk buffering is needed for saving intermediate results when calculating velocity spectra (Use disk buffering while processing). The latter mode is worthwhile only if the data volume is large and the module cannot function without it; otherwise, using this option will simply slow down the operation.
The module is standalone, i.e. it does not receive seismic data from the flow; it operates through the database. At the same time, the way it is presented and run does not differ from ordinary modules. The parameters are specified through the parameters dialogue window; the module is run using the Run button in the main application menu.
All necessary parameters should be specified correctly; otherwise the module will not start, displaying a message about the corresponding error.
Program configuration
By default, the program consists of two working windows.
The first window displays the input stack; here the user can pick horizons and select a horizon for velocity analysis.
Fig. 4. Stack display (upper part) and initial velocity model (lower part) windows
The second window opens when velocity analysis is run on the selected horizon; it consists of four parts (Fig. 5, from top to bottom).
All windows of the velocity analysis along the horizon support changing the display scale; scroll bars appear when required and the display can be scrolled with them. All windows have a joint horizontal scale and a joint horizontal scroll bar, and are synchronized with each other and with the scale and scroll bar in the horizontal direction. The initial and modified stack windows are synchronized in the vertical direction as well.
Program control is carried out through menu elements, some of which are duplicated by toolbar buttons. Some parameters can be changed after the program has been started.
When you run the module, first of all the main window of the module opens; it consists of two parts. The upper part displays the stack; the lower part displays the initial velocity model. In the upper part there is a horizontal CDP scale; in each part of the window there is a vertical time scale at the left side. If required, scroll bars appear; the horizontal scroll bar, like the horizontal scale, is shared between the two half-windows. The sub-windows and all three scales are synchronized, in the sense that scale changes and scrolling are performed synchronously. You can zoom in/out and modify the display parameters of the stack and the initial velocity model.
In the main window you can generate/modify/save/load picks and modify the display parameters of the velocity analysis elements; the velocity analysis window can also be run from the main window.
The following menu items are reserved for performing these operations:
Parameters – displays the visualization parameters; completely identical to the Draw parameters tab of the module parameters dialogue (see above).
Picking – switches the horizon tracking mode on/off (hot key “M”).
Mode – the mode of horizon tracking (hot key “A”); activates the dialogue for setting manual/automatic tracking parameters.
Hunt – automatic tracking of seismic events on the basis of correlation between neighbouring traces. Guide window length – the length of the window in which the cross-correlation function between neighbouring traces is calculated; Max shift – the maximum allowed shift of the event between neighbouring traces; Local maximum level – the local maximum level with reference to the global extremum; Halt threshold – the threshold of the global maximum of the cross-correlation function below which the tracking is terminated. (A sketch of this tracking loop is given after this list of menu items.)
Auto fill – tracking of the phase maximum in the window between two specified points.
Save as – save the current pick to the database, with a name indication (Ctrl+S hot key).
Line style – specify the line style in the dialogue (S hot key).
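The Hunt tracking loop can be sketched in Python as follows (a simplified illustration of the idea under assumed conventions; the Local maximum level logic is omitted for brevity, and this is not the module's actual code):

import numpy as np

def hunt_track(section, start_trace, start_idx, guide_len, max_shift, halt_threshold):
    # section[j, i]: sample i of trace j. Starting from a pick on one trace,
    # cross-correlate a window around the current pick with the next trace,
    # move the pick by the best lag, and stop when the normalized correlation
    # maximum falls below the Halt threshold.
    n = section.shape[1]
    picks = {start_trace: start_idx}
    cur, half = start_idx, guide_len // 2
    for j in range(start_trace + 1, section.shape[0]):
        if cur - half < 0 or cur + half > n:
            break  # window ran off the trace
        ref = section[j - 1, cur - half: cur + half]
        best_lag, best_cc = None, -np.inf
        for lag in range(-max_shift, max_shift + 1):
            lo, hi = cur + lag - half, cur + lag + half
            if lo < 0 or hi > n:
                continue
            seg = section[j, lo:hi]
            denom = np.linalg.norm(ref) * np.linalg.norm(seg)
            cc = np.dot(ref, seg) / denom if denom > 0 else 0.0
            if cc > best_cc:
                best_lag, best_cc = lag, cc
        if best_lag is None or best_cc < halt_threshold:
            break  # Halt threshold reached - tracking is terminated
        cur += best_lag
        picks[j] = cur
    return picks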
After having chosen a horizon, you can run the horizontal velocity analysis procedure using the Run Analysis menu item.
You are not allowed to select for editing the horizon on which velocity analysis is currently being performed. In order to edit such a horizon, you have to exit the velocity analysis mode or select another horizon for velocity analysis; after that, you may edit the horizon. In general, however, you then have to re-form the averaged seismic gathers and re-calculate the velocity spectra for subsequent velocity analysis along this horizon. After the user has finished editing the horizon and exited the picking mode, the horizon can be selected for velocity analysis again.
After a successful velocity spectrum calculation for the selected horizon, the spectrum is displayed in a special graphic window. The vertical axis is the velocity axis, with a vertical scale and a vertical scroll bar. Zoom in, zoom out and scrolling operations are available.
This window displays the current velocity function on top of the semblance, as well as the initial function used for the semblance calculation. The user can pick the velocity function in the semblance display window.
The fourth window of the velocity analysis on the selected horizon displays the NMO-corrected super gather set. Every N-th super gather is displayed (N is specified by the user in Parameters). The traces are visualized with an equal interval between each other and occupy all the available space. The available display parameters are the trace visualization mode (WT, VA, WTVA, colour with a palette) and the gain.
The super gathers are positioned at their central CDP points and synchronized with the stack and semblance display windows.
The NMO correction is calculated along parallel traveltime curves on the basis of the current velocity for the given CDP. Editing the velocity function changes the velocity, causing NMO recalculation and redrawing of the corresponding super gathers.
Calculation and display of modified stack strip. Initial stack strip display
Stacking along parallel traveltime curves is carried out on the basis of the current velocity function for each initial gather. It is performed within the time window along the horizon; thus, a strip of the modified stack along the horizon is acquired. This strip is displayed in a separate window. If the user edits the velocity function, the stack traces of the modified CDPs are stacked again using the new velocity values. A strip of the initial stack in the same time window along the horizon is displayed as well.
The TLIVE_S and TFULL_S fields are responsible for muting and contain the start and end times of the "meaningful" samples in ms. Make sure they do not contain garbage values.
Velocity values in the Parameters dialogue are given in m/ms.
The flag Use secondary storage while processing specifies whether the working data set is kept during the analysis of the selected horizon, or the data are loaded at each recalculation of the characteristics and dropped immediately afterwards. In the second case (flag set), the processing/recalculation time is longer, but less memory is needed.
After velocity analysis along a horizon, it makes sense to save the acquired horizontal velocity function to the database (an HVT object). The HVT->VVT module helps to transform a set of horizontal velocity functions into a vertical velocity field. It is also possible to export a horizontal velocity function from the database to a text file via the Database Manager.
HVT->VVT*
The module is designed to transform a set of horizontal velocity functions (acquired as a result of Horizontal Velocity Analysis) into a vertical velocity field. The obtained velocities can be used later for applying kinematic corrections and subsequent stacking.
Parameters:
• Input horizontal velocity tables – selection of one or several horizontal picks from the database.
• Output vertical velocity table – selection of a database object (vertical velocity pick) to which the result will be saved.
• CDP step – the CDP step for the vertical velocity picks.
• Zero time horizontal velocity table – the velocity value at zero time (optional parameter). The recording order is as follows:
10:1500, 80-100:1530
Here, the traces of CDP number 10 will have a velocity of 1500 m/s at zero time, and the CDPs between 80 and 100 will have 1530 m/s. The velocity values in between are interpolated.
• Final time Horizontal velocity table – the velocity value at the time (Final time, [ms]) specified by the user. The recording order is the same as for the Zero time horizontal velocity table parameter.
Horizon Velocity Auto-picker
The module performs automatic horizon velocity analysis. It automatically determines a horizon velocity table (HVT) along a specified horizon on a 2D line.
When you have several HVTs determined along the main horizons, you can convert them altogether into a conventional vertical velocity table (VVT) using the HVT->VVT tool (available from the menu Tools/HVT->VVT… of the main program window).
Inputs:
1) CDP gathers
2) Horizon pick (matched to traces by headers CDP:CDP)
3) Guide velocity (VVT) – used for partial NMO correction during the offset binning.
Algorithm
Supergathering
First, the module makes supergathers (SCDPs), combining several adjacent CDP gathers into one bigger
gather. The supergathers are made within a specified time window along the input horizon only (this is
controlled by Super Gathering parameters).
For each supergather, offset binning is performed (this step is controlled by Offset binning parameters).
For each offset bin, traces within the binning range are corrected for partial NMO to the center of the
bin using the input Guide velocity.
Corrected traces of each bin are stacked together. Each sample of the stack trace is independently
normalized by the number of non-zero values contributing to it (zeroes are considered a result of
muting). Now we have one averaged trace for each offset bin.
As a result, all supergathers contain an equal number of traces, as defined by the offset binning parameters.
N.B.: Note that inappropriate offset binning parameters may result in empty traces within the
supergathers.
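A minimal Python sketch of the per-sample normalization described above, assuming the traces of one offset bin have already been partially NMO-corrected to the bin center; the function name and array layout are our own, not the module's implementation.

import numpy as np

def stack_offset_bin(traces):
    # traces: (n_traces, n_samples) array for one offset bin.
    # Each output sample is divided by the number of non-zero
    # contributions; zero samples are assumed to result from muting.
    traces = np.asarray(traces, dtype=float)
    total = traces.sum(axis=0)
    live = np.count_nonzero(traces, axis=0)   # per-sample fold
    return np.divide(total, live, out=np.zeros_like(total), where=live > 0)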
Calculation of semblances
For each supergather one semblance trace (function) is calculated. Each sample of this trace corresponds
to a certain stacking velocity within a specified range (this process is controlled by Semblance
parameters). A standard semblance function is calculated along a hyperbolic time vs. offset curve that
corresponds to each velocity. The calculation is made within a specified time window centered at the
horizon. The formula for semblance calculation within a time window was taken from Yilmaz, Öz (2001).
Seismic data analysis. Society of Exploration Geophysicists, Vol.1, p. 301.
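A sketch of the standard semblance measure for one (time, velocity) point, following the textbook formula referenced above: the squared sum of amplitudes along the moveout curve, summed over the window, divided by the number of traces times the sum of squared amplitudes. The function name, units (ms, m, m/ms) and windowing details are illustrative assumptions, not the module's actual implementation.

import numpy as np

def semblance(gather, offsets, dt, t0, v, win_ms=20.0):
    # gather: (n_traces, n_samples); offsets in m; dt, t0 in ms; v in m/ms
    half = int(win_ms / 2 / dt)
    num = den = 0.0
    for k in range(-half, half + 1):
        s = ss = 0.0
        for trace, x in zip(gather, offsets):
            t = np.sqrt((t0 + k * dt) ** 2 + (x / v) ** 2)  # hyperbolic moveout
            i = int(round(t / dt))
            if 0 <= i < len(trace):
                s += trace[i]
                ss += trace[i] ** 2
        num += s ** 2
        den += ss
    return num / (len(gather) * den) if den > 0 else 0.0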
Maximum velocity picking
Velocities are picked on semblances using the Hunt autopicking algorithm, which attempts to trace a maximum
using cross-correlations of neighboring traces. We use the same algorithm in the Screen Display and
Seismic Display modules for semi-automatic horizon picking.
The Hunt algorithm requires a starting point indicating which particular event is to be traced. As this
module is not interactive, the starting point is determined as follows: we take a specified number of
semblance traces from the center of the frame, stack them together, and find the absolute maximum on
this averaged semblance. The velocity of this maximum, projected onto the central semblance trace of
the frame, is taken as the starting pick point.
On the central semblance we find the local maximum nearest to this starting pick point and then trace this
maximum to the left and to the right. When the algorithm, for any reason, fails to trace the maximum
any farther, it extrapolates the last picked velocity as a constant value until the end (or the beginning) of
the frame.
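A minimal sketch of the starting-point search described above (the name and array layout are our assumptions; the subsequent Hunt cross-correlation tracing is more involved and is not shown).

import numpy as np

def starting_point(semblance_panel, n_central):
    # semblance_panel: (n_traces, n_velocities) frame of semblance traces.
    # Stack n_central traces from the center of the frame and take the
    # absolute maximum of the averaged semblance as the starting pick.
    n = semblance_panel.shape[0]
    c = n // 2
    lo = max(0, c - n_central // 2)
    averaged = semblance_panel[lo:min(n, lo + n_central)].mean(axis=0)
    return c, int(np.argmax(np.abs(averaged)))   # (trace index, velocity index)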
It is possible to smooth the picked velocities before saving them. Though the velocities are determined
for SCDPs, the smoothing average is run along all CDP points. The result is saved to the project database
as an HVT object.
Module parameters
Output Velocity (HVT) – output horizon velocities along the input horizon. Select a database location
and specify a name for the new HVT-object here.
Guide velocity (VVT) – to be used for partial NMO correction during the offset binning.
Super Gathering parameters:
• Super gathering base (CDPs) – number of adjacent CDPs to be combined into an SCDP.
NOTE: This is the total number of CDP gathers to be combined into one SCDP. That is, if the number
is 3, each supergather contains the current CDP, the previous one (CDP-1) and the next one (CDP+1).
If you specify an even number, it will be automatically increased to be odd (e.g. if you specify 4, each
SCDP will contain 5 CDP gathers). It is also worth noting that not all supergathers contain the same
number of CDP gathers: at the edges of each frame the number of gathers gradually decreases down to
half of the specified base.
• Window size above/below horizon (ms) – this is the time window where supergathers are to be
made.
NOTE: When (Last offset - Start offset)/Step is a fractional value, the module will increase the
Last offset (m) to make the value an integer.
Semblance parameters:
• Hunt start point search window (traces) – number of semblances in the center of the frame to
be used for pick starting point evaluation.
• Guide window length (m/ms) – defines a maximum shift of a traced maximum between adjacent
traces.
• Correlation window length (m/ms) – the window where CCF is calculated.
• Local maximum level – in practice, tracing a certain phase using the absolute maximum of the
CCF is not stable and may result in the pick jumping to some other phase. For this reason, the
algorithm searches not for the global CCF maximum but for a local one which corresponds to a
smaller shift of the pick between the traces, even though it has a lower amplitude. The Local
maximum level parameter allows excluding the smallest local fluctuations from the search to
make it more stable. It defines the smallest amplitude of a local maximum to be considered,
computed as (local_maximum_level * global_maximum_value). All local extrema with
amplitudes smaller than that are skipped and the search goes further.
• Perform correlation threshold test – if a normalized CCF value is smaller than the specified
value, the semblances are considered to have no correlation and the tracing is stopped. The last
successful pick value will be extrapolated until the edge of the frame.
• Smooth velocity pick – switch it on if you want to smooth the velocity pick before saving.
▪ Pick smoothing base (CDPs) – though the velocities are determined for SCDPs,
smoothing average is run along all original CDP points and the averaging operator is
defined in CDPs.
Output parameters determine what exactly the module outputs to the flow. The options are:
• Pass input data – input traces will pass through unchanged. To automatically pick the
horizontal velocity law for several horizons at the same time, put the required number of Horizon
Velocity Auto-picker modules into the flow one after another in Pass input data mode.
• Semblance – the output is the traces of semblances. In this mode the module can additionally
save the picked velocities into HORIZON_VELOCITY header of the output semblance traces.
If you want to use this functionality, please create the HORIZON_VELOCITY header manually
as it is not included into the default list of trace headers.
• Super gathers – the supergathers used for semblance calculation. This option can be typically
used for debugging and problem solving.
NOTE: When the output is Semblance, the Number of threads parameter controls parallelization. When
set to 0, the number of threads running in parallel is equal to the number of logical CPU cores available
to the operating system.
Velocity Editor
This module is meant for creation of two-dimensional velocity models, as well as their editing,
smoothing, and binding to coordinates.
The module interface consists of two parts: dialog box for initial parameters setting, and a primary
running window of the module during its execution. The dialog window for parameters setting can be
opened during module operation, but in this case all parameter changes will be active only during the
current module execution session.
The Velocity Editor is intended for manual manipulation of the velocity field. In the case of single-channel
surveys, when the velocity field cannot be obtained directly from the data, the interactive velocity
editor is the only way to specify the velocity field manually using external information.
The velocity field is constructed by means of interactively defined polygons filled with certain velocity
values, and with the help of a brush used to color the velocity field manually.
In the flow, the Velocity Editor can operate on its own as an independent application. In this case, only
the velocity field is displayed and edited in the module window.
If the Velocity Editor is launched from the flow in which the stacked data are running, the velocity field
will be displayed against the time section background in the module window. The velocity field is
matched to the data according to a coordinate (distance) along the line, which shall be stored in the data
in an arbitrary trace header field and correspond to the coordinates specified in the velocity field file.
Parameters
The dialog window for the initial parameter specification for the module is as follows:
This tab contains parameters which define the position of files with polygons and velocity field on disk.
• Velocity file: Specifies the name for the text file that contains (or will contain) a velocity field.
To select a file, click the Browse... button or enter the file name manually.
• Polygons file: Specifies the name for the text file that contains (or will contain) the polygons
description. To select a file, click the Browse... button or enter the file name manually.
The Data bounds tab contains parameters that define the velocity field size. If an input dataset passes
through the flow, click the Header… button to define the trace header field of the input dataset that
contains the linear coordinate for matching with the velocity field. If no input dataset is available, this
field is ignored and all coordinate values are in abstract units.
The Trace display tab contains parameters that affect the data visualization method. They make sense
only if some stacked data are running in the flow in which the module operates. The meaning of these
parameters is the same as that of the parameters in the Screen Display module dialog (the Common
Parameters dialog window).
Axis tab
This tab contains two fields (X Step and Time Step) where the intervals between labels on the
coordinate axes should be specified.
Working with the module
Parameters Tab. Allows changing the initial module parameters (see the Parameters section).
Polygon Tab:
• New Creates a new polygon. For a new polygon, the editing mode is switched on automatically.
• Edit Switches the edit mode on/off for the active polygon. If the edit mode is switched off,
a polygon can be made active by clicking the left mouse button (MB1) on one of its points.
The polygon edit mode makes it possible to add/delete/relocate polygon points, close and open
the polygon. To add a point to a polygon, click the left mouse button (MB1) at the desired spot.
The points are sequentially added to the polygon:
▪ To relocate the point, capture it by clicking the right mouse button (MB2) and drag it to
the desired place. When clicking and holding the right mouse button (MB2) the active
polygon point closest to the cursor position will be captured.
▪ To delete the point, click the left mouse button (MB1) near the point while
simultaneously holding the Shift key down. The active polygon point closest to the cursor
position will be deleted.
• Close Closes/Opens the polygon. This command is available only within polygon edit mode.
• Delete Deletes active polygon.
• Fill... Fills the active polygon with specified velocity values, possibly with horizontal and/or
vertical velocity gradient. To fill in the polygon, close it first. When this command is chosen the
dialog box for specification of velocity and two gradients appears.
Brush Tab:
• Parameters Serves to set the "velocity brush" parameters. When this command is chosen the
dialog box for "velocity brush" parameters specification appears. Here, specify the shape, size
and velocity of the brush.
• Brush Serves to switch the model drawing mode by "velocity brush" on or off. In the brush
mode, an oval or a rectangle (according to the selected brush shape) is displayed at the cursor
position. With the left mouse button (MB1) held down, moving the mouse over the area covered
by the brush sets the velocity specified in the brush parameters.
• “+” Serves to switch on/off the image zoom-in mode. In the zoom-in mode the mouse cursor
changes its shape. By clicking the left mouse button (MB1) you can enlarge the image by one
step. When doing this, the image is scrolled so that the selected point is in the center of the screen.
• “-” Serves to switch on/off the image zoom-out mode. By clicking the left mouse button (MB1)
you can diminish the image by one step.
Exit Exits the Velocity Editor. If the file names for the velocity field and polygons were specified in
the parameters, the changes will be saved automatically on exit.
Interactive Velocity Analysis
The Interactive Velocity Analysis module is used for interactive analysis of the stacking velocities.
The following trace header fields must be filled in:
• SCDP – CDP supergather number. If the data are input into the flow using the Super Gather
module, this field is assigned automatically; otherwise it should be assigned manually.
• OFFSET and AOFFSET – should contain offsets and their absolute values, correspondingly.
• ILINE_NO and XLINE_NO – for 2D data, the first field should contain the CDP numbers (CDP),
the second field – any constant integer value.
The module interface consists of two parts: a window for initial parameters setting and a main window
of the module that appears while executing this module. The dialog box for parameters setting can be
opened while the module is running but, in this case, all parameter changes will be active only during
the current module execution session.
When the module is activated, the window containing 8 tabs will open:
1. Output velocity. Here, specify a database velocity pick object (recommended) or a text file that
will contain the resulting edited velocities. This tab is similar to the window that appears when
the NMO/NMI module is activated, except for the Single velocity function option which is
lacking in the tab.
2. Input velocity. Here, specify an existing input velocity field (a velocity pick object in the
database or a text file) to continue editing it. If the module is executed for the first time and
no velocity function exists yet, specify the same velocity pick object as in the Output velocity
tab. As a result, after you execute the flow with the module for the first time and save the
velocities, the same velocity field will be used as input in the following sessions. Thus, you
will be able to continue editing it. This tab is similar to the window that appears when the
NMO/NMI module is activated.
3. Super Gather. On this tab, only the Bin offsets option (similar to that in the Super Gather
module parameters) is available.
4. Semblance. This tab sets the parameters for semblance (velocity spectrum) calculation as in the
following:
5. Gather Display. This tab is used to set seismogram display parameters. The tab looks the same
as the Semblance Display tab.
6. FLP Display This tab is used for the setting of display parameters of the dynamic stack obtained
via the current velocity function picked in the Semblance Display panel. The tab looks the same
as the Semblance Display tab.
7. CVS Display This tab is used for the setting of display parameters for constant velocity stacks.
The tab looks the same as the Semblance Display tab.
8. Semblance Display. This tab is used for setting the velocity spectrum (semblance) display
parameters as in the following:
Tab Semblance
In the Start velocity and End velocity fields the start and end velocities for velocity searching should
be specified. To calculate the semblance the velocity search step should be set in the Velocity step field,
and the time search step in the Time Step field. The number of constant velocity stacks displayed on the
screen is specified in the Number of CVS field.
In the Display Mode field select the display mode for the traces of velocity spectrum:
• WT/VA displays the traces in wiggle trace and variable area mode
• WT displays the traces in wiggle trace mode,
• VA displays the traces in variable area mode
• Color displays the traces in variable density mode. By clicking the Palette... button you can
specify the color palette manually (when all color palette points are deleted, the traces are
displayed in grayscale).
When starting execution of the flow containing the Interactive Velocity Analysis module, a window
similar to the one shown below will open:
All the panels have one common Y axis (time axis, ms) but every panel has its own X axis: velocity axis
for Semblance display and CVS display, offset axis for Gather display.
Velocity function picking is accomplished in the semblance panel (Velocity or Semblance display).
Point adding is done by clicking the left mouse button (MB1) on the desired point.
Relocation of existing points is done by clicking the right mouse button (MB2). By doing this the point
closest to the cursor position will be relocated.
Point removal is performed by double-clicking the right mouse button (MB2) on the point while holding
down the Ctrl key (Ctrl+MB2 DblClick).
In order to enlarge a data display fragment horizontally or vertically, place the mouse cursor in the
desired axis area and click the left mouse button (MB1) on the point corresponding to one of the edges
of the fragment to be enlarged. Then, holding the button pressed, relocate the cursor to the point
corresponding to the other edge of the fragment. After releasing the button the selected data fragment
will be enlarged to the size of the whole panel.
In order to return to the initial scale on one of the axes, double-click the left mouse button (MB1) in
the corresponding axis area.
Tab File
• Horizon list
• Exit – exits the Interactive Velocity Analysis module.
Velocity Field
• Show previous: displays the previous velocity function in the Semblance Display
• Show mean: displays a mean velocity function in the Semblance Display
• Show Lay: displays layer velocities calculated from the Dix formula in the Semblance
Display (see the sketch below)
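The Show Lay option relies on the Dix formula; a minimal Python sketch, assuming an RMS (stacking) velocity function sampled at the pick times, the helper name being ours:

import numpy as np

def dix_interval_velocities(t, v_rms):
    # Dix formula: v_int^2 = (v2^2*t2 - v1^2*t1) / (t2 - t1) for each layer
    # between consecutive picks; assumes physically consistent picks so the
    # numerator stays non-negative. t and v_rms are same-length arrays.
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
    return np.sqrt(num / (t[1:] - t[:-1]))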
NMO
• Muting Percent allows specifying the stretch muting threshold in percent. All data stretched by
NMO by more than the specified percentage will be zeroed.
Tool Bar
Status Bar
At the bottom of the Interactive Velocity Analysis window there is a status bar.
In the left field of the status bar you can see information about velocity (V, m/s) and Time (T, ms) at the
point of the current cursor position. In the central field you can find information about the current
supergather (SCDP) and profile numbers (in 2D case) or about InLine, CrossLine numbers (in 3D case).
The information contained in the right field of the status bar is duplicated in the header bar of the module
running window.
IMPORTANT! In the Super Gather module the Bin Offsets option should be switched off. When
needed, binning parameters may be set in the Interactive Velocity Analysis module.
The interactive velocity analysis module (Interactive Velocity Analysis) can be launched from the Map
application by clicking the V button on the tool bar. All the parameters for the module should be specified
in the Interactive Velocity Analysis parameter dialog.
Velocity Analysis Precompute
This module is used for preliminary calculation of the velocity spectrum for Interactive Velocity
Analysis.
Parameters
The start and end velocities for velocity scanning are defined in the Start velocity and End velocity
fields. The velocity scanning step for calculation of the similarity coefficients is specified in the Velocity
step field, and the time scanning step is specified in the Time Step field. The number of constant velocity
stacks is defined in the Number of CVS field.
Velocity Curve Editor
The module is designed for editing vertical velocity picks saved as objects in the project database.
Module parameters
The module parameters allow adjusting the display of velocity picks for further editing:
Module operation
The module is standalone, i.e. it doesn't need any additional modules in the flow and should be the
only one there. To run the module, specify the parameters and click the Run button.
Loading velocity picks
After running the module you have to load velocity picks from the database. To do so, click the button
on the toolbar, select the Load Velocity item from the File menu, or use the shortcut key Ctrl+O. You
will be prompted to select the required velocity pick from the database.
After a velocity pick is selected, it will be displayed in the main window of the module as a set of
velocity curves, each tied to a CDP point number.
Display parameters editing
If appropriate, you can modify the display parameters of the velocity curves. To do so, click the icon
, select the Parameters... item from the Tools menu, or use the shortcut key Ctrl+P; after that
the parameters adjustment dialog box will appear (see Module parameters).
After a velocity pick is loaded, one curve is active and displayed in red. To change the active curve,
either Ctrl+click on a new curve or call the curve editing dialog: click the icon , select the Edit
Velocity Picks... item from the Tools menu, or use the shortcut key Ctrl+E. As a result the following
dialog will appear:
Here you can select a velocity curve being edited by CDP point number (CDP). You can edit an active
curve by dragging its nodes in the main window of the module via the left mouse button, or changing
the values in the table of the dialogue manually.
To save the velocity pick, overwriting it, click the button on the toolbar, select the Save Velocity item
from the File menu, or use the shortcut key Ctrl+S. To save the velocity pick under another name,
select the Save Velocity As... item from the File menu or use the shortcut key Ctrl+Shift+S; after that
a dialog prompting you to save the velocity pick to the selected folder appears.
Velocity manipulation*
This stand-alone module is intended to convert a vertical velocity function (like those created by Interactive
Velocity Analysis) from any given type and domain to any other given type and domain. It is also possible
to combine 2 input velocity functions into 1 output velocity function (provided that both input functions
are of one and the same type and in one and the same domain). In addition to velocity functions in the
form of picks, the module also works with velocity traces.
Parameters
The input velocity function (First Input velocity) can be represented as:
If you need to combine 2 input functions into one output, check the Combine second velocity
function with the first box and select the second velocity function in the Input velocity field. It is
assumed that both input functions are of the same type and are given in the same domain.
As a result of the transformation, the input velocity function can also be interpolated and smoothed. To
do this, activate the Interpolate and smooth output velocity checkbox, while setting the necessary
parameters:
1) If no seismic traces pass through the flow and only the table with velocity functions is input, the
module generates, for each velocity function (for each CDP, actually), a velocity trace with the
specified sample rate.
2) If seismic traces pass through the flow, the module removes their amplitudes and fills the trace
samples with velocities calculated by interpolating, over the area, the values specified in the
input table. This mode allows you to visualize and estimate the NMO velocities for each CDP
point on the stack section.
Velocity Table → Trace Transfer
The module operates in two modes: seismic traces are input to the module, or seismic traces are not
input to the module.
Parameters
• Velocity – the table with velocity functions, loaded by the user from the database. The parameter
supports the use of the replica system.
• Assume interval velocities— this parameter affects how the module considers the velocities
from the velocities table loaded on the input. In case the parameter is active, the velocity values
are considered interval, and their values between the points of the velocity functions change
abruptly. If the parameter is not active, the velocities between the points of the velocity function
are interpolated linearly.
• No input trace expected. Generate new trace for each velocity function – this parameter
must be activated if seismic traces are not input to the module.
If the parameter is active, the algorithm generates a velocity trace for each CDP from the Velocity
table. The trace parameters are set by the user:
• Time step— the time step between the samples on the velocity traces.
• Number of samples— the number of samples in the output traces.
If this parameter is inactive, the module expects traces at the input; if they are absent, the algorithm
terminates with an error message. The number of samples in the trace and the time step between the
samples are the same as in the input seismic traces. The values for each trace are interpolated from the
table loaded into Velocity.
1) First, the algorithm builds traces according to the velocity functions specified in the Velocity
table.
2) After this, the algorithm interpolates these traces throughout the survey area.
3) Then, at the points where there are seismic traces, the amplitude values are replaced by
interpolated velocities.
4) At the output we have velocity cube/cross section.
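A minimal sketch of building one velocity trace from a single velocity function, showing both the linearly interpolated and the interval ("Assume interval velocities", stepwise) behaviors described above; names and exact edge handling are illustrative assumptions.

import numpy as np

def velocity_trace(pick_times, pick_vels, dt, n_samples, interval=False):
    # pick_times must be sorted ascending and in the same units as dt
    t = np.arange(n_samples) * dt
    if interval:
        # stepwise velocities that change abruptly at the pick times
        idx = np.searchsorted(pick_times, t, side="right") - 1
        return np.asarray(pick_vels)[np.clip(idx, 0, len(pick_vels) - 1)]
    # linear interpolation between picks, endpoints held constant
    return np.interp(t, pick_times, pick_vels)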
Trace→Velocity Table Transfer
The module allows you to convert the velocity traces into velocity function tables. To do this, you need
to specify the time range for which the table will be created, the step between the samples on the velocity
trace, and the grid to which the velocities will be interpolated.
Module operation window Trace→ Velocity Table Transfer. In Panel 1, it is necessary to set the time ranges for
which the velocity functions table will be created, as well as the input data dimension. In Panel 2, it is necessary
to set it manually, or load the grid for which the velocity table will be compiled (see the following figure). In
Panel 3, it is necessary to specify the location in the database where the output file will be uploaded.
Parameters
• Start time/Maximum time. The Start time and Maximum time parameters define the range of
times for which the velocity functions table will be calculated.
• Time step is the step between the samples of the velocity traces to be recorded in the table.
• Dimensions – this parameter determines the dimension of the data that is input to the module:
2D cross-section from the velocity traces or 3D cube from the velocity traces.
• User's grid – the parameter is responsible for using the user survey grid. In case the parameter
is not active, a table with velocity functions is compiled for the existing survey geometry, even
if it is not uniform. If the parameter is active, the algorithm uses interpolation to calculate velocity
traces at the center of each bin of the grid, and only then a table with velocity functions is
calculated for the obtained traces.
The principle of the module operation in case of setting a user grid. Blue circles indicate points in which there
are velocity traces. These values are input to the module. Red circles indicate the points for which the velocity
traces are calculated by triangulation from the traces located in the place of the blue circles.
For 3D:
• Origin X – X-coordinate of the origin (the bottom left corner) of the grid.
• Origin Y – Y-coordinate of the origin (the bottom left corner) of the grid.
• Start Inline – the Inline number, from which the numbering in the grid starts.
• Start Xline – the Xline number, from which the numbering in the grid starts.
• End Inline – the Inline number, which ends the numbering in the grid.
• End Xline – the Xline number, which ends the numbering in the grid.
Load grid mode – the grid is loaded by the user from the database.
Save grid – the grid defined by the user is saved to the database.
For 2D:
• Start CDP – the number of the first CDP in the 2D grid.
• End CDP – the number of the last CDP in the 2D grid.
• CDP step – the step between the CDP numbers.
Output velocity – specify the path in the database where the velocity table will be stored. The
parameter supports the use of the replica system.
Header Time/Depth Conversion
The module converts the header values recorded in the trace header from time to depth and vice versa using the
velocity model.
Parameters
Conversion tab
Header – specify the header to be recalculated from time to depth (or vice versa) using the specified
velocity model.
NOTE: the module writes the recalculated value to the same header. For example, if the horizon pick
time is stored in the PICK1 header, the module will write the recalculated depth value to the same
PICK1 header.
Use coordinate-based interpolation – selects the method of interpolation of velocity values between
CDPs. If the option is enabled, the program interpolates values using the X and Y coordinates of the
CDP points for which the velocity law is set; if disabled, interpolation is performed by the CDP point
numbers.
Velocity tab
Database-picks – select this option to read the velocity law from the velocity picks stored in the project
database. To select a pick, click Browse....
Use VSP per-traces file – select this option to read the velocity from the VSP file.
Velocity domain – this parameter defines in what domain the velocity field is specified:
• Time
• Depth
• RMS (root-mean-square)
• Interval
• Use average velocities – when this flag is unchecked, the time-to-depth (or depth-to-time)
conversion is performed using RMS velocities (the input velocity model, if not RMS, will be
scaled to RMS velocities). Otherwise, average velocities will be used for the conversion (input
velocities will be scaled to average velocities). If the input velocities are RMS and the flag is
checked, average velocities will be calculated via interval velocities.
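The basic relation behind the conversion can be sketched as follows, assuming the header stores two-way time in ms and an average velocity in m/ms is available at that time; helper names are ours, and the module's RMS-to-average scaling is not shown.

def time_to_depth(twt_ms, v_avg_m_per_ms):
    # depth = average velocity down to the event times half the two-way time
    return v_avg_m_per_ms * twt_ms / 2.0

def depth_to_time(depth_m, v_avg_m_per_ms):
    # inverse conversion: two-way time from depth and average velocity
    return 2.0 * depth_m / v_avg_m_per_ms

# e.g. a 1000 ms pick at an average velocity of 2 m/ms maps to 1000 m depth
assert time_to_depth(1000.0, 2.0) == 1000.0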
Stacking/Ensembles
Ensemble Stack
In the Ensemble Stack module you can stack all traces within each ensemble of the flow into one trace.
Every trace sample at the output is a combination of the corresponding trace samples at the input. An
ensemble is considered a group of traces with one and the same value of the primary sorting key
defined in Trace Input or another input routine.
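A minimal sketch of ensemble stacking, assuming the ensemble key of each trace is taken from the primary sorting key header; names and array layout are our own.

import numpy as np

def ensemble_stack(traces, keys):
    # traces: (n_traces, n_samples); keys: per-trace ensemble key values
    # (e.g. CDP numbers). Returns one averaged trace per unique key.
    traces = np.asarray(traces, dtype=float)
    keys = np.asarray(keys)
    unique = np.unique(keys)
    stacks = np.stack([traces[keys == k].mean(axis=0) for k in unique])
    return unique, stacks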
Parameters
Ensemble Split
Parameters
Header to split ensemble – this field is used to specify the header based on which the splitting will be
performed.
performed.
Ensemble Split Undo
This is the “companion” module for Ensemble Split. It allows undoing the current ensemble splitting
performed by the Ensemble Split module. The module has no parameters.
Ensemble Redefine
The module is used to redefine ensemble splitting. In the Framed mode, input data from one frame are
sorted by the new header field and re-split into ensembles based on the specified field.
Parameters
Header to define output ensemble – the header according to which the traces will be split into new
ensembles.
Primary and Secondary header to sort within ensemble – headers by which sorting will occur
within the new ensemble.
CDP Stack To Dataset
The module allows stacking traces from the processing flow into a specified dataset. Traces are matched
by the CDP header.
Parameters
Output dataset – specify the dataset for which the additional CDP stack will be carried out.
Treat zero as result of muting – zero trace samples resulting from muting will not be counted in
the total fold cube.
Use fold dataset – write a dataset with the fold to the same database folder, with the "_fold" prefix.
Shot stack
The module performs consecutive stacking of the data of several shots obtained with the same
source-receiver configuration. To do so, you need to specify the dataset into which the ensembles of
traces going through the flow will be accumulated.
Angle Muting
The module is designed to calculate angle muting curves and to apply muting within the specified
boundaries of incidence angles:
• Calculation and saving of the angle muting curves to the database, where they can be used for
display and analysis in the Seismic Display module
• Direct data muting in the specified angle range
Incidence angle calculation is made according to the following formula (Walden, 1991):
$$\sin^2\theta = \frac{x^2 V_i^2}{V_r^2 \left(V_r^2 t_0^2 + x^2\right)}$$
where
x – source-receiver distance
$V_i$ – interval velocity
$V_r$ – root-mean-square velocity
$t_0$ – zero-offset traveltime
ATTENTION: CDP seismograms with NMO corrections applied are input into the module. When using
the Framed mode, it is necessary to tick the Honor Ensemble Boundaries checkbox so that CDP
ensembles are input without being broken up.
• Angle curves – as a result of the module's work, a new database object is created: a curve
corresponding to the central angle of the muting range, which can further be visualized in the
Seismic Display module. In the visualization module, a parameter must be activated in order
to display the curves:
• Gathers – seismograms after muting (when one CDP seismogram is input, a number of
seismograms equal to the number of central angles specified by the user in the table
corresponds to it; the number of traces in each seismogram remains unchanged)
Interval velocity – interval velocity law in the form of a pick from the database. There are two ways
of setting the velocity: choosing a pick directly from the database ( ) or specifying the path to it as a
text string.
Angle ranges – table for setting the central angles. Note that the values of the lower and upper angles
are specified in the table; however, one curve with the central angle value corresponds to each pair in
the database.
For example:
3 lines with the following angle parameters are specified in the table
As a result of the module's work, 3 curves corresponding to the central angles of the specified ranges
appear, namely: Angle_15, Angle_25, Angle_35 (given the Angle_ parameter specified as a prefix by
the user)
Besides explicit values, representation using the replica system is possible:
{@Band1_min}, {@Band1_max}, 0, 0
{@Band2_min}, {@Band2_max}, 0, 0
• Inner angle – the minimum value of the angle forming the angle muting range
• Outer angle – the maximum value of the angle forming the angle muting range
• Min offset – the intersection of the min offset value with the outer curve forms the area after
which muting takes place (for more details, see the figures below)
• Max offset – the intersection of the max offset value with the inner curve forms the area in
front of which muting takes place (for more details, see the figures below)
Lines are added/removed using the Append row ( ) and Remove selected rows ( ) buttons.
Angle muting with the parameter values min offset=0 max offset=0 before and after application
Angle muting with the parameter values min offset=99 max offset=0 before and after application
Angle muting with the parameter values min offset=0 max offset=86 before and after application
Parent location of output angle curves - the path where the module's work result is recorded.
Name prefix of each output angle curve - the specified name is combined with the central angle value
taken from the above table.
Number of Threads - there is a possibility of including the parallelizing computation option to
increase computing performance and speed. For this purpose, specify the number of cores used in the
module's work.
Gathers mode
Tapering – the number of samples, expressed as a percentage of the initial trace length, added to the
initial trace on both sides.
Central angle output header – the header where the central angle value shall be written
Fill in the module parameters as shown in the figure below. Based on the table, the curves
corresponding to 3 central angles - Angle15º, Angle25º, Angle35º - shall be obtained as a result of the
module's work.
Activate the option to show Angle Curves in the visualization module Seismic Display in the tab
Display Parameters
In the Mask line, write an expression making it possible to display all the curves obtained as a result of
the module's work. For this, specify the directory where the curves are stored, and then enter the
name corresponding to the Name Prefix of Each Output Angle Curve parameter with an asterisk at
the end, indicating that all curves shall be chosen. It is also possible to specify the angle range for
visualization; for that, use the expression:
Name Prefix of Each Output Angle Curve<a,b>, where a and b represent the limits of the angle
range for visualization.
In the example below, all curves with the central angle value within the range from 10 to 30 degrees
shall be visualized. Turn on legend on the central panel in the tab Display Parameters in order to
display the labels.
For convenience of visualization, fill in the table for two central angles:
Leave the default header where information about the central angle value shall be stored –
CENTRAL_ANGLE.
As a result of the module's work, a number of seismograms equal to the number of central angles
(the number of rows in the table) shall correspond to one input CDP seismogram. Thus, in our example,
two CDP seismograms with the number 600, etc., are formed. Only the values falling within the muting
range specified in the table remain in each seismogram formed. So, for example, for the central angle
of 15 degrees the traces within the range of 10-20 degrees remain, and for 25 degrees – those within
20-30 degrees.
Angle Stack
The module performs stacking within CDP seismograms in the angle ranges specified by the user.
As a result of the module's work, the angle range corresponding to the central angle of each CDP
seismogram is converted by stacking into one trace with a header equal to the central angle value.
ATTENTION: the number of input seismograms is equal to the number of output seismograms;
however, the number of traces in a seismogram at the input and output shall differ. The number of traces
after stacking within each seismogram shall be equal to the number of central angles specified in the
table (the number of lines).
The module parameters:
Interval velocity – interval velocity law in the form of a pick from the database. There are two ways
of setting the velocity: choosing a pick directly from the database ( ) or specifying the path to it as a
text string.
Angle ranges – the table of angle ranges within which the stacking is performed; the resulting trace
corresponds to the central angle of each range.
Besides the clear indication, representation using the replica system is possible:
{@Band1_min}, {@Band1_max}, 0, 0
{@Band2_min}, {@Band2_max}, 0, 0
• Inner angle – the minimum value of the angle forming the angle stacking range
• Outer angle – the maximum value of the angle forming the angle stacking range
• Min offset – the intersection of the min offset value with the outer curve forms the area after
which muting takes place (for more details, see the figures below)
• Max offset – the intersection of the max offset value with the inner curve forms the area in
front of which muting takes place (for more details, see the figures below)
Lines are added/removed using the Append row ( ) and Remove selected rows ( ) buttons.
Angle muting with the parameter values min offset=0 max offset=0 before and after application
Angle muting with the parameter values min offset=99 max offset=0 before and after application
Angle muting with the parameter values min offset=0 max offset=86 before and after application
• N – sample values are summed up and divided by the total number of stacked samples (N)
• √N – sample values are summed up and divided by the square root of the total number of stacked samples (√N)
• None – normalization is not used
Central angle output header - the header where the central angle value shall be written
Module parameters
Input dataset – choose dataset to be migrated
Dimension – define type of survey – 2D or 3D
Define velocities:
• From DB – select velocity picks from the database. Velocity picks can be obtained with the
Interactive Velocity Analysis module, the Horizon Velocity Analysis module, or by importing an
existing velocity table.
• Manual – this option allows user to define velocities manually. Press View table to edit velocity
table:
▪ Number of lines – enter number of lines in the velocity table. They will correspond to
number of CDP positions. Velocities will be linearly interpolated between defined values.
Format of the string is the following:
Time(ms):Velocity1(km/s), Time1-Time2:Velocity2(km/s),….
In case of 3D survey, select Inline and Crossline numbers to assign velocity values.
Geometry
• 2D mode:
▪ X step (m) – select the correct distance between CDP positions.
▪ CDP increment – increment between CDP numbers, should be equal to the step of
sequential CDP positions.
• 3D mode:
▪ X step (m) – select the correct distance between Crosslines
▪ Y step (m) – select the correct distance between Inlines
▪ Xline increment – increment between Crossline numbers which should be equal to the
step of sequential XLINE_NO header values
▪ Iline increment – increment between Inline numbers which should be equal to the step
of sequential ILINE_NO header values
Sample interpolation. Interpolation of amplitude values between samples can be done linearly or with a
cosine function. Select the desired interpolation method here.
Anti-aliasing. To prevent spatial aliasing, triangle or boxcar filters are implemented. If spatial aliasing
is not a concern, turn this option off to increase computation speed.
Maximum frequency to migrate (Hz) – define maximum valuable frequency in your data. During
migration process data will be resampled according to this frequency to increase computation speed.
Data should be filtered first in the given frequency range to avoid artifacts.
Migration aperture
The amplitude summation range can be limited by both angle and range:
• Angle aperture – select the angle aperture. Choose 90 to use the maximum aperture.
• Angle aperture tapering (deg) – to avoid edge effects during summation, tapering should be
applied at both ends of the aperture. Define the tapering range in degrees here.
• Range aperture (m) – define the aperture in meters. 0.6*D, where D is the target depth, could
be a good starting point for testing the migration parameters. The migration run time mostly
depends on the aperture range value.
• Range aperture tapering (m) – to avoid edge effects during summation, tapering should be
applied at both ends of the aperture. Define the tapering range in meters here. In the 3D case
the user should define both the X (corresponds to crosslines) and Y (corresponds to inlines)
range apertures.
Pre/Post-Stack Kirchhoff Time Migration*
This is a standalone module, i.e. it does not require any input or output modules, and must be the only
module in the workflow.
The module performs 2D and 3D Kirchhoff migration before and after stacking in the time domain. A
dataset and RMS velocities obtained through velocity analysis (Interactive velocity analysis module) or
manually defined by the user are input into the module. In the process of migration, the velocity values
are interpolated in time and between the CDPs.
Module parameters
Dimension – select the data dimension: 2D profile or 3D cube. If post-stacking data are used, you need
to check the Post Stack box.
Input/Output tab
Input datasets:
• Add – select the dataset for migration. Depending on the selected dimension, pre- or post-stacking
2D or 3D data need to be input into the module.
• Add mask – use the replicas system to import files. (see section Replica system)
• Remove – remove selected files from the list Input datasets
After adding a file in the Input datasets window, you can edit it
Output datasets:
• Dataset – select the output dataset where the migrated data will be written
• Location – directory where the generated file will be stored
• Overwrite – the dataset will be overwritten
• Add to existing – allows you to add data to an existing dataset without overwriting the original
content (active if 3D is selected)
Don’t resample back – allows not returning the data to the original sampling interval. The sampling
interval will be determined based on the frequency specified on the Parameters tab – Max frequency
to migrate.
Velocity:
• Choose From Database – select a velocity function from the database. The velocity function can
be obtained using the Interactive velocity analysis module. You can also load a custom velocity
function into the database using the Database Manager application.
• Velocity dataset – choice of velocity function represented as velocity traces
• Manual – define the velocity function manually. Press View table to edit the velocity table:
The buttons allow adding and deleting lines to/from the table. To delete a line, select it and press
the “-” button.
The number of lines is the same as the number of CDPs for which the velocity function is defined.
For 2D data, velocity for the points is defined based on the CDP number. For 3D data, it is defined based on
the inline and crossline numbers (INLINE and XLINE).
The velocities will be interpolated linearly between the specified CDPs. The velocity definition format is as
follows:
• Input Buffer Size – size of the memory block used for data input in Mb.
• Output Buffer Size – size of the memory block used for data output.
• Max number of threads – number of threads into which the process will be split during the
execution of the module. This is used to accelerate the processing. The maximum value must
never exceed the number of CPU cores.
Migrate from surface – option to account for actual depths of sources and receivers relative to datum:
• Source depth from final datum (ms) – source depth from final datum
• Receiver depth from final datum (ms) – receiver depth from final datum
Parameters tab
Aperture Parameters
• Aperture – the migration aperture is defined as the distance from the migration point at specific
times, in meters. The string format is as follows: time(ms):aperture length(m), time:aperture
length, ... (for example, 0:300, 300:400, 700:500); see the sketch after this list.
IMPORTANT! The aperture value must always increase with time.
A value equal to 0.6*D, where D is the target horizon depth, is a good starting point for the migration
aperture. The migration time is directly dependent on the migration aperture size.
• Apply Aperture Taper – smooth attenuation at the ends of the migration operator (tapering) is
implemented to avoid edge effects during the stacking procedure. Tapering is specified as
percentage of the aperture.
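As referenced above, the time:aperture string defines a piecewise aperture function of time. A small sketch of evaluating it, assuming linear interpolation between the listed times (the interpolation law is our assumption, as is the helper name):

import numpy as np

# aperture table from the example string "0:300, 300:400, 700:500"
times_ms = [0.0, 300.0, 700.0]
aperture_m = [300.0, 400.0, 500.0]

def aperture_at(t_ms):
    # piecewise-linear aperture vs. time; held constant beyond the last time
    return float(np.interp(t_ms, times_ms, aperture_m))

print(aperture_at(500.0))  # 450.0 m, halfway between 400 m and 500 m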
Anti-aliasing:
The problem of spatial aliasing during migration can be addressed by using smoothing filters. However,
you can disable this option to speed up the migration process.
Stretch muting – stretching results from the input trace being projected onto the output trace: two
consecutive output samples may be associated with two input values separated by a relatively small
time increment (smaller than the time step). You can find the theory at the end of this section.
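A sketch of the stretch criterion, assuming hyperbolic moveout and the definition stretching = dt_mig/dt_inp given at the end of this section; the module's exact muting rule may differ, and the names are ours.

import math

def stretch_percent(t0_ms, offset_m, v_m_per_ms):
    # For t(x) = sqrt(t0^2 + (x/v)^2) the output/input time increment ratio
    # is t(x)/t0, so the stretch in percent is (t(x)/t0 - 1) * 100.
    tx = math.sqrt(t0_ms ** 2 + (offset_m / v_m_per_ms) ** 2)
    return (tx / t0_ms - 1.0) * 100.0

def mute_if_stretched(sample, t0_ms, offset_m, v_m_per_ms, percent):
    # zero the sample when its stretch exceeds the muting threshold
    return 0.0 if stretch_percent(t0_ms, offset_m, v_m_per_ms) > percent else sample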
Maximum frequency to migrate (Hz) – all frequencies above this one will be filtered out, and the
traces will be resampled to an interval at which the specified frequency will become the Nyquist
frequency. This technique speeds up the migration procedure significantly. After completion of data
migration, the original sampling interval is restored. It is recommended to pre-filter data to the specified
frequency range.
Shaping filter – applies a derivative operator. The pulse waveform changes during migration. The figure
below shows the original zero-phase pulse and the same pulse after Kirchhoff 2D migration without
application of folding with a derivative operator. To restore the zero-phase signal, a Rho filter or Shaping
filter is applied. Depending on the dimension of the migrated data, you can select a 2D or 3D filter or
migrate the data without a filter.
Migrate offset bins separately – when the flag is set, in the case of pre-stack migration the output will
be a set of migrated common-offset gathers rather than a complete migrated stack image.
Offset binning. The common-offset gathers will be formed according to the Offset binning parameters:
$$\mathrm{stretching} = \frac{dt_{mig}}{dt_{inp}}$$
If the input data consist of a 3D cube, the input and output data geometry is configured on the 3D Output
Grid tab.
The grid to which the output data will be migrated can be defined in two ways:
• Loaded from a file (such a grid can be created using the 3D CDP Binning procedure). After the
grid is loaded, its parameters will be shown in the corresponding fields under Grid Parameters.
• Set grid manually
Inline/Xline:
▪ Origin – X/Y coordinate of the grid origin (bottom left corner).
▪ Step – horizontal/vertical cell (bin) size.
▪ Count – number of Inline/Xline cells.
Axis azimuth – grid rotation angle (clockwise) in degrees.
The output grid can be additionally limited by Inline and Xline. To do this, specify the minimum and
maximum inline and crossline values to be included in the calculations under Output Bin Limits 3D.
If the input data consist of a 2D profile, the output dataset geometry is configured on the 2D Output Grid
tab. Only one option is currently available – migrating to the CDPs contained in the input file.
It is recommended to launch migration using only graphics cards not connected to monitors.
If you use graphics cards of the Quadro, Tesla or GeForce types, switch them to the special TCC (Tesla
Compute Cluster) mode. In this mode the cards will only be used for calculations and will not be able to
render graphics.
To switch to this mode, use the NVIDIA SMI software available in the folder
C:\NVIDIA Corporation\NVSMI. With the standard NVIDIA SMI syntax, the command has the form
nvidia-smi -g number -dm 1, where "number" is the numerical order of the graphics card in the list of
installed cards (starting from 0!).
Tesla cards are usually in the needed mode initially, so you don't have to do anything with them.
https://docs.nvidia.com/gameworks/content/developertools/desktop/nsight/tesla_compute_cluster.htm.
Stolt F-K Migration
The module performs Stolt migration of seismic gathers acquired at zero source-receiver offset. It is
assumed that the traces are equidistant, with the interval between neighboring traces given by the
dx parameter (in m).
Module parameters
• Dip. slope – range of dips, for which a smooth spectrum roll-off is to be done (∆θ).
• Bottom tapering – time window at the bottom of the trace, for which a smooth bottom tapering
is to be done (ms).
Scheme of the used part of the 2D spectrum.
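For reference, a minimal constant-velocity Stolt f-k migration sketch in Python (numpy); it omits the module's dip-slope and bottom-tapering options and is an illustration of the method, not the module's actual implementation.

import numpy as np

def stolt_migrate(section, dt, dx, v):
    # section: (n_samples, n_traces) zero-offset data; dt in s, dx in m, v in m/s
    nt, nx = section.shape
    spec = np.fft.fftshift(np.fft.fft2(section))   # centered (f, kx) spectrum
    f = np.fft.fftshift(np.fft.fftfreq(nt, dt))    # temporal frequency, Hz
    kx = np.fft.fftshift(np.fft.fftfreq(nx, dx))   # spatial wavenumber, 1/m
    vh = v / 2.0                                   # exploding-reflector velocity
    out = np.zeros_like(spec)
    for j, k in enumerate(kx):
        # Stolt mapping: each migrated frequency fm comes from the recorded
        # frequency sign(fm) * sqrt(fm^2 + (vh*k)^2)
        fmap = np.sign(f) * np.sqrt(f ** 2 + (vh * k) ** 2)
        col = spec[:, j]
        re = np.interp(fmap, f, col.real, left=0.0, right=0.0)
        im = np.interp(fmap, f, col.imag, left=0.0, right=0.0)
        scale = np.where(fmap != 0.0, f / fmap, 0.0)   # Jacobian of the mapping
        out[:, j] = scale * (re + 1j * im)
    return np.fft.ifft2(np.fft.ifftshift(out)).real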
STOLT3D (3D Stolt F-K Migration)
The module performs 3D F-K Stolt migration. The seismic cube with zero source-receiver offset is input.
Traces are assumed to be located on a regular rectangular grid with no gaps. If there is no data in certain
sections of the rectangular grid, they must be filled with zero traces using the "Pad input cube with
zeroes" tool inside the module, or in some other way.
Module input: traces sorted by the headers ILINE_NO (inline numbers) and XLINE_NO (crossline
numbers). There should be no missing traces in the dataset. If there are any, it is necessary to fill the gap
in any way mentioned above.
ATTENTION! The module can operate in two modes: «Stand-alone mode» and «Flow mode». In
"Stand-alone mode", the module must be the only one in the flow and does not require input and output
modules. It is recommended to use the module in the "Stand-alone mode", as this optimizes the memory
and run time used by it.
Parameters
Input dataset – path to input dataset which will be migrated.
Output dataset – path to the dataset which will be obtained as a result of module operation.
Figure 1. Layout view of the dataset in the module "3D Stolt F-K Migration". Green stands for the traces that
were acquired in the field. Yellow stands for the area in which the survey was not conducted. For correct
operation of the module, it is necessary to fill the gaps (yellow) with zero traces. This can be done using "Pad
input cube with zeroes" parameter. Also, for the correct migration, it is necessary to add zero traces along the
edges of the rectangle of the original cube. This is done using "Padding" parameters. The values of the traces at
the junction of the regions are smoothened using the "Tapering" parameter. Traces that participate in smoothing
are highlighted by double red hatching.
Variable velocity (m/ms) – Stolt cascade migration mode, which takes variable velocity (m/ms) into
account. The velocity may vary both vertically and laterally. The algorithm implements the classical
approach, stretching the time axis of each individual trace. After activating this mode, a dialog box with
the velocity law settings appears:
There are three ways to specify the velocity function:
• Activate the Single velocity function (constant velocity function) option and specify it manually.
• Activate the Get from file option and specify velocity function from file. To do this, click the
Browse... button, select the desired file in the opened standard dialog box.
• Activate the Database picks option and specify velocity function previously saved in the project
database. To do this, click the Browse... button, select the desired database object in the opened
standard dialog box.
Save template and Load template buttons are meant for current module parameters saving in the
template in the project database and for saved parameters loading from previously saved template,
respectively.
W-factor – a constant that should be specified for each stage of the cascade migration. The W-factor is a
simplification of a complex function that characterizes the deviation of the velocity in a given interval
from the constant velocity law. The parameter varies in the range from 0 to 2; in the case of constant
velocity, W=1.
Migration parameters
Max. frequency to migrate – the maximum frequency ($f_{max}$) that contributes to image generation (in Hz)
Frequency declining interval – frequency declining interval $\Delta f$; within this interval the spectrum
is multiplied by the function
$$w(f) = 0.5 + 0.5\cos\left(\pi\,\frac{f - f_{max} + \Delta f}{\Delta f}\right)$$
Dip. slope – range of dips, for which a smooth spectrum roll-off is to be done (∆θ).
Bottom tapering – time window at the bottom of the trace, for which a smooth bottom tapering is to be
done (ms).
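A small sketch of this frequency roll-off, assuming w(f) = 1 below f_max − Δf and 0 above f_max (the helper name is ours):

import numpy as np

def frequency_taper(f, f_max, df):
    # cosine roll-off within [f_max - df, f_max]; 1 below, 0 above
    f = np.asarray(f, dtype=float)
    w = 0.5 + 0.5 * np.cos(np.pi * (f - f_max + df) / df)
    return np.where(f <= f_max - df, 1.0, np.where(f >= f_max, 0.0, w))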
Padding is the panel that is responsible for filling the edge regions of the cube with zero traces (hatching
in Figure 2). By default, the region is not active, and the fill parameters are determined automatically
by the module for the input dataset. In this case, the number of added inlines and crosslines coincides
with their number in the rectangle of input dataset, and the number of samples added to the traces
depends on the length of the traces in the input cube.
To specify user values for the filling parameters, place a checkmark on "Use custom padding" field.
• Inline padding (traces) - the number of traces that are added to each inline along the Crossline
axis.
• Xline padding (traces) - the number of traces that are added to each crossline along the Inline
axis.
• Time padding (samples) – the number of zero samples that are added to each trace (similar to
the hatched regions).
ATTENTION! From the theoretical point of view, the number of zero traces on the sides should be
greater than or equal to the migration aperture, which can be estimated by the hyperbola with an
appropriate velocity.
Pad input cube with zeros is a parameter that fills empty regions inside the input data rectangle (the
yellow region in Figure 2).
Tapering is the panel that is responsible for smoothing the values at the junction of the added zero
traces and the original cube traces. If there is no smoothing, migration artifacts associated with the
boundary regions may appear on the section.
• Inline taper length (traces) – the number of traces along the Crossline axis that are involved in
smoothing.
• Xline taper length (traces) – the number of traces along the Inline axis that are involved in
smoothing.
• Time taper length (samples) – the number of zero samples that are involved in smoothing.
Grid is the panel that is responsible for loading the survey grid into the data cube. The grid specifies the
step between inlines and crosslines (dX, dY); with its help, when gaps are filled with zero traces, it is
easy to restore their CDP_X and CDP_Y coordinates.
In the standard case, the grid can be loaded from the database using Load from database. If you want to
specify your own grid configuration, you can do this in the Manual mode.
• Origin X and Origin Y are the coordinates of the origin of the grid (bottom left corner).
• Rotate is the grid rotation angle.
• Inline Step and Xline Step – a step in the numbers of inlines and crosslines.
• Start Inline and Start Xline – initial numbers of inlines and crosslines from which the grid
will be further constructed.
Use Grid Limits – a field that allows you to specify the region within which the migration will take
place. To do this, place a checkmark on "Use Grid Limits", after which the fields for the rectangle
borders become active. If the limits of the rectangle specified by the user are larger than the limits of
the input data rectangle, the missing regions must be filled with zero traces.
T-K Migration
This module is meant for T-K migration of single-channel data, common-offset sorted data, or
CDP-stacked data. The migration is accomplished in the time–wavenumber (T-K) domain. The primary
advantage of this method is its high accuracy in the case of steep and even overturned reflecting horizons
and velocities varying with depth. The main disadvantage of this method is that lateral velocity variations
are not taken into account. Just as in any other migration method, the correct specification of the migration
velocities is critical. This method requires the user to specify interval velocity as a function of time. It
should be noted that interval velocities are usually higher than root-mean-square ones.
Parameters
• Interval velocities for migration – in this field, specify the interval velocity pairs to be used during migration as time:interval_velocity values separated by commas (for example, a hypothetical three-layer function could be entered as 0:1.5, 500:1.8, 1200:2.3). Time is expressed in ms, velocity – in km/s.
• Number of traces to pad – here, specify the number of zero traces added to data before BPF is
accomplished in order to reduce BPF cyclic effects.
• Maximum frequency to migrate (Hz) – maximum coherent frequency (in Hz) which is
presumably contained in data. At frequencies higher than the one indicated in this field the
dispersion is possible after migration. The lower the indicated value is, the higher the migration
calculation speed will be.
• Speed (>>1 for significant dispersion) – increasing this factor to a value greater than one decreases the computation time at the cost of increased high-frequency dispersion on steeply dipping boundaries. The default value (1.0) should not cause significant dispersion. This parameter is especially useful when testing various velocity functions.
• Edge taper length (traces) – edge taper length expressed in traces. Both edges of the section
will be smoothed by weighting function in the window with specified length. The most external
traces will be zero.
Dip-moveout correction
Deregowski in his “What is DMO?” (1986) article states the following DMO principles:
1. Migrate each trace to zero offset so that each common-offset section becomes identical to a zero
offset section.
2. This in turn implies that post DMO, but prestack, common-midpoint gathers contain the reflections
from common depth points as defined by normal incidence rays. That is, reflector point dispersal for
non-zero offset traces is removed
3. Cross-line ties are improved because a zero offset trace is the same regardless of the direction of the
offsets from which it is derived.
4. Dead traces are interpolated according to local time dips without those dips having to be estimated
by a separate operation.
5. Coherent noise with impossibly steep dip is removed, without the artificial alignments often
associated with dip filters, and at the same time steeply dipping fault planes are better imaged
alongside horizons with smaller dips.
6. The signal-to-noise ratio is improved, especially at high offsets.
7. Stacking velocities become independent of dip, so that correct stacking of simultaneous events with
conflicting dips is made possible.
8. Velocity analysis is improved, and provides velocities which are more appropriate for migration as well as stacking.
9. Diffractions are preserved through the stacking process so as to give improved definition of
discontinuities after post-stack migration.
10. Post-stack time migration becomes equivalent to prestack time migration, but at considerably less
expense.
Before applying the 2D F-K DMO module, use Offset DMO Binning* to create constant offset bins with a predefined step. As a result, the DMOOFF header field will be filled in. This field should be used to create common offset sections by selecting DMOOFF:CDP as the selection fields.
Parameters
CDP step (m) – select the correct step between CDP positions
Offset DMO Binning*
2D F-K DMO requires constant common offset sorting (DMOOFF:CDP). Offset DMO Binning fills in
DMOOFF header according to the selected parameters:
Parameters
• Distance to the center of nearest offset bin – distance to the center of nearest DMO-offset bin
in meters
• Bin increment – increment for DMO-offset binning
NOTE: Offset values in the OFFSET header field that fall into the range from (bin center − Bin increment/2) to (bin center + Bin increment/2) will be assigned to the same offset bin (DMOOFF header field) with the current bin center value. Bin increment is always positive, while offset values can be negative as well (a sketch of this binning rule follows the parameter list below).
• Maximum number of bins – by selecting the maximum number of offset bins, the maximum offset-bin value can be defined.
• Use absolute offset values – absolute values of offsets will be used.
• Dataset – choose dataset to calculate DMOOFF header field.
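A sketch of the binning rule referred to in the note above (the function and its behaviour are assumptions reconstructed from the parameter descriptions, not RadExPro code):

def dmo_offset_bin(offset, first_center, increment, max_bins, use_abs=True):
    # Returns the bin-centre value that would be written to the DMOOFF header.
    if use_abs:
        offset = abs(offset)
    n = round((offset - first_center) / increment)  # index of the nearest bin
    n = max(0, min(n, max_bins - 1))                # honour 'Maximum number of bins'
    return first_center + n * increment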
Dip-moveout processing sequence (Yilmaz, O, Seismic Data Analysis, 2001, v.1 p. 692):
1) Perform velocity analysis at sparse intervals and pick velocity functions with minimum dip
effects
2) Apply NMO correction using flat-event velocities
3) Sort data to common-offset sections (Use DMO Offset binning* module) and apply DMO
correction, sort data back to CMP gathers
4) Apply inverse NMO correction using flat-event velocities from step 1
5) Perform velocity analysis at frequent intervals as needed to derive optimum stacking velocities
6) Apply NMO correction using optimum stacking velocities
7) Stack the data and migrate using an edited and smoothed version of the optimum stacking
velocity field
VSP
VSP Migration
This module is obsolete. It is recommended to use the 2D-3D VSP Migration module with similar parameters.
2D-3D VSP Migration
This module performs Kirchhoff migration of VSP seismograms [Dillon, 1990] and VSP-CMP
transformation. The migration method is based on a horizontally layered model and does not take
lateral velocity changes into account. The input of the module is a VSP seismogram passing through
the flow with separated nonconverted reflected wave field and completed DEPTH, REC_X, REC_Y,
REC_ELEV, SOU_X, SOU_Y, SOU_ELEV header fields.
• A layered velocity model with sufficient layer thickness (when 10-meter observation intervals
are used, introduction of layers less than 50-60 m thick into the model is justified only in
certain cases) is built based on the near shot point data. In some cases, vertical anisotropy may
be taken into account.
• A velocity model is selected in the space below the well bottom based on prior data.
• After the nonconverted reflected wave field of the near or far shot point is cleared of interference and noise, it is used to obtain a migrated profile.
Parameters
Model file – this field allows selecting a migration velocity model file. Such file may be generated,
for example, by building a velocity model in the Advanced VSP Display module (see the description
of this module for the file format). Click the Browse… button to select a file.
Z Start of the image, Z End of the image – start and end depths (m) of the migrated section.
Sample interval of the image – sample interval (m) of the migrated section.
Preferred boundary slope, Preferred slope range – these two parameters allow adjusting the filter
for the expected dip moveout and specifying the expected angle and angle range (in degrees),
respectively.
PS Wave migration – converted PS wave migration. This box is active when the 3D output option
is disabled.
Mute unaccessible area – muting of seismogram areas corresponding to “invisible” (within the
horizontally layered model and the known observation geometry) medium areas. This box is active
when 3D Output is disabled.
Transform only, do not migrate – if this option is enabled, the module performs VSP-CMP
transformation; if it is disabled, the module performs migration.
Straight rays – if this option is enabled, migration will be performed in the time domain, and interval velocities will be converted to RMS velocities.
3D Velocity model – read/do not read the text file containing the velocity model for migration from the 3D format (format description is provided below). This option is active only when 3D output is enabled.
Uneven velocities – if this option is enabled, different velocities will be used for rays going from
the source to the image point and from the image point to the receiver in the process of migration.
The velocity function for each point is defined based on the specified 3D velocity model as a value
corresponding to the middle of the image – source and image – receiver distance. If this option is
disabled, the same velocity function will be used for every image point with the same horizontal
position. This option is active only when 3D output is enabled.
Extract velocity – if this option is active, an interval velocity field will be displayed in the migration
result visualization window instead of the migrated image (this option is still under development).
Derivative – the order of the derivative applied by the shaping (waveform correction) filter:
▪ None
▪ 1/2
▪ 1
Wavelet shaping for 2D Kirchhoff migration is determined by a 45-degree constant-phase
spectrum and an amplitude spectrum proportional to the square root of the 2D migration frequency
(Derivative = 1/2).
For 3D Kirchhoff migration, the phase displacement is equal to 90 degrees, and the amplitude is
proportional to the frequency (Derivative = 1).
For VSP-CMP transformation, the phase displacement is equal to 0 degrees (Derivative = none).
▪ by points – the line is defined using points, i.e. sections with coordinates specified
interactively in meters in the X1, Y1; X2, Y2 format.
▪ automatically, using depth (m) – the migration line is defined automatically. The
maximum depth is entered into the box, and the migration line is calculated relative to the
first source position in the specified depth range (enabling this option is not recommended
when working with data obtained using the “Walk-Away VSP” method).
▪ dx – interval along the profile in meters.
▪ Spline – use/do not use spline smoothing.
3D Output geometry – a set of parameters used to define the 3D migration grid. This option is
active only when 3D output is enabled.
The module reads the velocity field from a text file with the following structure:
1st line should always begin with #VSP and may contain any comments
2nd line: the number of cells into which the X axis will be divided, and interval on the X axis in meters.
Comments may be added after a # symbol.
3rd line: the number of cells into which the Y axis will be divided, and interval on the Y axis in meters.
Comments may be added after a # symbol.
4th line: X coordinate of the grid origin, Y coordinate of the grid origin, and Y axis azimuth.
Starting from the 5th line, all lines contain the velocity function defined for the set of layers. Each line
consists of layer base depth, velocity, total number of layers in this velocity function, number of cells on
the X axis, and number of cells on the Y axis.
#VSP example velocity model
2 10000.000 #NX, DX
2 10000.000 #NY, DY
1410 2.0 2 0 0
4010 2.1 2 0 0
1410 2.0 2 0 1
4010 2.1 2 0 1
1410 2.0 2 1 0
4010 2.1 2 1 0
1410 2.0 2 1 1
4010 2.1 2 1 1
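A minimal reader for this format, given as a sketch (it assumes whitespace-separated values and treats everything after a '#' as a comment, as in the example above):

def read_3d_velocity_model(path):
    with open(path) as f:
        raw = [ln for ln in f if ln.strip()]
    assert raw[0].lstrip().startswith("#VSP")        # mandatory header line
    rows = [ln.split("#")[0].split() for ln in raw[1:]]
    nx, dx = int(rows[0][0]), float(rows[0][1])      # cells and interval along X
    ny, dy = int(rows[1][0]), float(rows[1][1])      # cells and interval along Y
    x0, y0, azimuth = map(float, rows[2])            # grid origin and Y-axis azimuth
    functions = {}                                   # (ix, iy) -> [(depth, velocity)]
    for depth, vel, _nlayers, ix, iy in rows[3:]:
        functions.setdefault((int(ix), int(iy)), []).append((float(depth), float(vel)))
    return nx, dx, ny, dy, x0, y0, azimuth, functions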
The module produces a migrated section or a VSP-CMP section plus a one-dimensional velocity model
(distribution of velocities along the profile for a single depth or along the depth for a single image
position on the axis).
Comments on use
Regardless of the selected parameters, the medium is assumed to be one-dimensional. This means that ray tracing is performed for a horizontally layered medium, and the method of migration in the time domain assumes the medium to be one-dimensional for each ray (however, different rays may have different velocity functions).
A velocity function should be defined for every node of the regular 2D grid. The following approach is
used to obtain velocity functions for points with specified coordinates:
1) If the point for which velocity is to be determined falls outside the grid and is not located between
any two lines serving as grid line extensions (point A in the figure below), the velocity value for that
point is selected equal to the nearest grid node (in this case – point C in the figure).
2) If the point for which velocity is to be determined falls outside the grid and is located between
any two lines serving as grid line extensions (point B in the figure), the velocity value for that point is
determined using the values in the nearest 2 nodes (in this case – points D and E in the figure).
3) If the point for which velocity is to be determined is located inside the grid (and, therefore, inside
a certain cell), the velocity value for that point is determined using the values in the nodes of the cell
containing the point in question.
If the velocity value for a given point is determined using more than one velocity function, the following principles apply: first, all velocity functions are normalized to the number of layers in the model. This is achieved by determining velocity values at layer boundaries and then applying linear or bilinear interpolation to determine velocity values for each point (see the sketch below).
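A sketch of the bilinear case (point inside a grid cell); the extrapolation cases reduce to nearest-node or two-node linear interpolation as described above:

def bilinear(v00, v10, v01, v11, tx, ty):
    # tx, ty in [0, 1] give the relative position of the point inside the cell;
    # vIJ is the velocity value at the cell node (I along X, J along Y).
    v0 = v00 * (1.0 - tx) + v10 * tx
    v1 = v01 * (1.0 - tx) + v11 * tx
    return v0 * (1.0 - ty) + v1 * ty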
2D VSP/Crosswell Kirchhoff Depth Migration
This module performs depth migration of 2D VSP or crosswell seismic data. It extends the
functionality of an older 2D-3D VSP Migration module to accommodate laterally variable velocity
models. Additionally, it introduces new capabilities, including the migration of crosswell data and the
migration of multiples.
This is a standalone module that does not require any additional modules in the workflow. Datasets
from the RadExPro database are input into the module.
The module is capable of inputting VSP data in any sorting. However, organizing the traces into
common-shot or common-receiver gathers can enhance computation speed. This efficiency is due to
potential reductions in the time required to load source and receiver travel times from the disk.
For proper migration, several headers are essential. The required headers include DEPTH (cable depth), REC_X, REC_Y, REC_ELEV (receiver position) and SOU_X, SOU_Y, SOU_ELEV (source position).
The cable depth is used to assign receiver numbers to traces along the well for travel time computation. Two traces are assumed to belong to one actual receiver if their cable depths differ from each other by less than 1 cm.
The module projects the source and receiver coordinates in SOU_X, SOU_Y, REC_X, REC_Y onto a
straight line on the surface, which is defined by the Start point X, Y coordinate, [m] and End point
X, Y coordinate, [m] parameters.
The headers REC_ELEV and SOU_ELEV denote receiver and source elevations, respectively, which
correspond to a specific datum. The REC_ELEV and SOU_ELEV axes point downwards (receivers
and sources below the datum will have positive values of REC_ELEV and SOU_ELEV). One needs to
pay special attention to selecting the datum. Since most velocity formats in RadExPro are defined
from the zero datum level, we normally compute travel times from datum elevation equal to 0 to the
maximum depth specified in the module parameters. If there are sources or receivers above the datum
(indicated by negative values of REC_ELEV and SOU_ELEV), the model is extended upwards to
include them.
NOTE: Migration is computed starting from the datum level. In other words, the first sample in the
migrated traces corresponds to the zero elevation. Regions of the subsurface situated above the datum
are not imaged.
Parameters
Output dataset – output dataset for the storage of the migration result.
Travel time mode – a switch which identifies how travel times are obtained. There are two options:
• ‘Compute and save to a dataset’ – travel times are computed before migration and saved to a
dataset.
• ‘Load from dataset’ – if travel times were previously computed with the same source-receiver
geometry and the same velocity model, they can be loaded from a dataset to save time.
The most common use for this switch is to run the migration with ‘Compute and save to a dataset’
once, then switch to ‘Load from dataset’ to speed up the subsequent launches during migration
parameter testing. The travel times need to be recomputed if the survey geometry, velocity model or
velocity smoothing parameters change. The travel times are saved to a RadExPro dataset; they can be displayed with tools like Seismic Display or Screen Display.
Wave type – type of waves in the input data for migration. The options are ‘P-P’, ‘P-S’, ‘P-P ghost waves’, ‘Crosswell downgoing’ and ‘Crosswell upgoing’; each mode is described in detail below.
2D Grid parameters – this group of parameters sets the 2D output grid onto which the data is
migrated. Here, the user sets the binning line and the necessary grid spacings. During the processing,
the sources and receivers are orthogonally projected onto this binning line. If some sources or receivers
do not orthogonally project onto the binning line (for example, if some sources are further from the
well than the end point of the line), the grid will be automatically extended to include these sources or
receivers. It is, however, suggested to come up with a binning line which includes all sources and
receivers, as the process of automatic extension is not controlled by the user. If an alternative
projection strategy is required by the user, they may apply tools such as Trace Header Math to set up
a projected 2D geometry manually and use this projected 2D geometry in migration.
• Start point X coordinate, [m] – X coordinate of the starting point of the binning line (for offset
surveys, this can be the X coordinate of the wellhead).
• Start point Y coordinate, [m] – Y coordinate of the starting point of the binning line.
• End point X coordinate, [m] – X coordinate of the end point of the binning line (for an offset
survey, this can be set to the source coordinate).
• End point Y coordinate, [m] – Y coordinate of the end point of the binning line.
• Lateral spacing, [m] – lateral spacing of the migration grid.
• Depth spacing, [m] – spacing of the migration grid along depth.
• Depth interval, [m] – minimum and maximum depths of the migration grid.
Travel time computation parameters – this group of parameters sets the storage of travel times. The
travel times are computed before the migration and are then used during the migration process.
• Travel time dataset – name of dataset where the travel times will be saved.
• Travel time decimation along x – decimation factor along the lateral axis for travel time
storage. For a decimation factor equal to n, every nth sample of the travel time grid along this
axis is stored on the disk. For migration, the grid is interpolated back to its original size.
Increasing the travel time decimation factor decreases the size of the travel times on the disk and
simultaneously reduces the accuracy and increases the computation time.
• Travel time decimation along z – decimation factor along the vertical axis for travel time
storage.
NOTE: Travel times may occupy even more disk space than the data. The size of the dataset with
travel times depends on the number of sources and receivers, as well as on the grid settings.
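For illustration (the numbers are hypothetical, not defaults): with a 1000 × 500 travel-time grid and decimation factors of 2 along both x and z, only 500 × 250 values are stored per source or receiver position – a quarter of the full grid – at the cost of interpolating the grid back to its original size during migration.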
P-wave velocity – this group of parameters sets the velocity model for migration. There are a few
options for the format of input velocities – Single function, User (text) file, Database picks, Velocity
dataset and VSP file (created in Advanced VSP Display).
Horizontal velocity smoothing, [m] – size of the Gaussian velocity smoothing window along the
horizontal axis (window size is equal to three standard deviations).
Vertical velocity smoothing, [m] – size of the Gaussian velocity smoothing window along the vertical
axis (window size is equal to three standard deviations).
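A sketch of this smoothing along one axis (it assumes a regularly sampled velocity grid; SciPy is used purely for illustration):

from scipy.ndimage import gaussian_filter1d

def smooth_velocity(v, window_m, spacing_m, axis):
    # The stated window length equals three standard deviations of the Gaussian.
    sigma_samples = (window_m / 3.0) / spacing_m
    return gaussian_filter1d(v, sigma_samples, axis=axis)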
Shaping filter type – a waveform correction operator (“rho-filter”), which is supposed to restore the
waveform shapes of seismic reflections after migration (the details can be seen in the manual for
Pre/Post-Stack Kirchhoff Time Migration*). The user can choose to use a 2D or 3D shaping filter,
as well as turn the filter off.
Preferred slope, [deg] – preferred slope of the horizons in the migration result. The Preferred slope and Preferred slope range parameters set up the dip filter for the migration; they attenuate slopes which are outside the specified range. This can be used for dealiasing.
Preferred slope range, [deg] – expected range of slopes in the migration result around the preferred
slope.
Max untapered scattering angle, [deg] – the start of the incidence/reflection angle taper. For each
point in the subsurface, the angles between the incident ray from source and vertical and between the
reflected ray going to the receiver and vertical are computed. Using these angles, a taper function is
applied, which can be used to limit the angle ranges participating in migration. Reflections with angles
below Max untapered scattering angle preserve their original amplitude.
Max nonzero scattering angle, [deg] – the end of the incidence/reflection angle taper. Reflections
with angles higher than Max nonzero scattering angle are attenuated completely. Between Max
untapered scattering angle and Max nonzero scattering angle, a smooth taper function is applied.
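A sketch of the resulting taper for a single angle value (the cosine shape is an assumption for illustration; the exact taper function is not documented here):

import math

def scattering_angle_taper(theta_deg, max_untapered_deg, max_nonzero_deg):
    if theta_deg <= max_untapered_deg:
        return 1.0                                   # original amplitude preserved
    if theta_deg >= max_nonzero_deg:
        return 0.0                                   # completely attenuated
    t = (theta_deg - max_untapered_deg) / (max_nonzero_deg - max_untapered_deg)
    return 0.5 + 0.5 * math.cos(math.pi * t)         # smooth roll-off in between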
Weight type – this parameter allows for the selection of the weight type for VSP migration. The available options include ‘Offset’ and ‘Walkaway’ (see the weighting equation below); migration weights can also be disabled, which is recommended, for example, for crosswell migration.
Apply antialiasing – a switch for source-side dealiasing scheme in migration. A method similar to the
one used by Abma et al. (1999) is applied. The input data is smoothed by triangle filters depending on
the travel time derivatives along the source axis, which keeps only low frequencies for steep events.
This can be used as an alternative or in conjunction with the slope filter set by Preferred slope and
Preferred slope range. The antialiasing strategy turned on by the Apply antialiasing parameter
requires a constant interval between the sources in the data.
Antialiasing shot spacing, [m] – this is the shot spacing used for the antialiasing algorithm. The user
can set this to the nominal shot spacing in the survey to apply the antialiasing similar to Abma (1999).
If the results are not satisfactory, one can increase this value to obtain stronger antialiasing or decrease
it to make antialiasing weaker. Note that antialiasing does not work for surveys with just one shot
point. The algorithm assumes that shot spacing is regular, so it is not recommended to use it in case of
irregular sources.
Apply fold normalization – a switch which turns on the normalization of the migration image by the
migration fold. This needs to be turned off for a conventional migration run. Enabling this option
allows for the amplification of poorly illuminated zones in the migration image.
Mute turning waves – a switch for the muting of turning waves. If turned on, this mutes the rays
which turn from downgoing to upgoing on the way from the receiver/source to the subsurface point. In
general, turning rays rarely contribute to the image (they can improve the imaging of very steep dips).
Muting them can allow one to increase the signal-to-noise ratio of the final image.
Number of threads – sets the number of threads for parallelization. Setting the number of threads to 0
utilizes all available resources.
The module implements 2D depth migration of VSP data. The migration takes a 2D velocity model
and a VSP dataset. The velocity model is used to compute the travel times from each source and
receiver to each point in the subsurface with a fast-marching algorithm (Sethian, 1999). The migration
summation trajectories, the ray incidence angles, as well as the weights are then computed from these
travel times. A few variants of the migration weights are available, with the VSP weights taken from
the work of Dillon (1990).
Image(x) = Σ ( ΔLr·√(Rs/Rr)·cos(θr) + ΔLs·√(Rr/Rs)·cos(θs) ) · √(Rs + Rr) · Data(r̃, s̃, t̃)
Here, r̃ and s̃ are the receiver and source indices for the current trace, and t̃ are the travel time curves computed with the fast-marching (eikonal) algorithm. The distances Rs, Rr are the distances (amplitude factors) between the current source/receiver and each point in the subsurface x, and the angles θs, θr are the angles between the rays from the current source/receiver and the normal vectors to the source line/well. The ‘offset’ weights set ΔLs to 0 in the equation above, and the ‘walkaway’ weights set ΔLr to 0 (a small sketch of this weight term follows below).
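A sketch of the weight term for one trace and one image point, following the equation above (variable names are illustrative):

import math

def vsp_weight(Rs, Rr, cos_theta_s, cos_theta_r, dLs, dLr):
    # 'Offset' weights use dLs = 0; 'walkaway' weights use dLr = 0.
    term_r = dLr * math.sqrt(Rs / Rr) * cos_theta_r  # receiver-side term
    term_s = dLs * math.sqrt(Rr / Rs) * cos_theta_s  # source-side term
    return (term_r + term_s) * math.sqrt(Rs + Rr)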
For the migration of converted waves, the main difference is that the receiver-side travel times, angles
and distances are computed using the S-wave velocity model, while the source side is still computed
using the P-wave velocity model.
The antialiasing algorithm inside the migration computes the travel time derivatives with respect to the
source and uses them for the filtering of migrated arrivals. The algorithm only works if there are
several sources present in the survey. Using the computed derivatives, input data are filtered before
migration with space- and time-variant triangle filters, which pass only low frequencies for events with
steep travel time curves (Abma et al., 1999). The strength of antialiasing filters is set by the
Antialiasing shot spacing parameter. It is suggested to set it to nominal shot spacing first and then
decrease or increase it if the results are unsatisfactory.
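A sketch of the central idea of this scheme: the length of the smoothing (triangle) filter grows with the local travel-time dip along the source axis, so steep events keep only low frequencies. All names and the exact scaling are assumptions for illustration:

def triangle_half_length(dtdshot_s, shot_spacing_m, nominal_spacing_m, dt_s):
    # Larger |dt/dshot| (steeper summation trajectory) gives a longer filter,
    # i.e. a lower passband; increasing 'Antialiasing shot spacing' relative to
    # the nominal spacing strengthens the smoothing.
    slope = abs(dtdshot_s) * (shot_spacing_m / nominal_spacing_m)
    return max(1, int(round(slope / dt_s)))          # half-length in samples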
‘P-P ghost waves’ mode migrates the reflections which bounce from the subsurface horizons, then
bounce from the Earth’s surface and are then recorded by the receivers of the VSP survey as
downgoing waves (similar to Yu et al., 2022). To use this mode, one needs to supply the downgoing
VSP wavefield as input data (instead of the upgoing wavefield used for conventional P-P reflection
migration). Note that ‘P-P ghost waves’ mode is more sensitive to the errors in the velocity model
since the migrated waves pass through the subsurface several times on their way from the source to the
reflector, then from the reflector to the surface and from the surface to the receiver. This mode operates
under the assumption that the Earth’s surface is flat, which makes it particularly effective for marine
VSP surveys utilizing impulse sources. When setting the source-receiver geometry and the input
velocity model for this mode, one needs to use the Earth’s surface as datum. Up-down deconvolution
influences the migration result for this mode, so it is recommended not to apply up-down
deconvolution before ‘P-P ghost waves’ migration (replace it with a deterministic source wavelet-
based deconvolution, or at least use short up-down deconvolution operators). This mode is mainly
intended for walkaway surveys, although obtaining an offset VSP image with ‘P-P ghost waves’ mode
is also possible.
There is an option for crosswell upgoing and downgoing migration in Wave type. Migration of
crosswell data has significantly different ray coverage when compared to VSP, but the algorithm itself
is similar. In crosswell data, both downgoing and upgoing wavefields can be migrated (Yu et al.,
2008). There is a dedicated mode for each of these migrations – ‘Crosswell downgoing’ and
‘Crosswell upgoing’. The parameters are set just like in VSP migration. After both migration results
are obtained, a combined image can be constructed by summing the upgoing and downgoing results or
subtracting one from the other with Trace Math or Dataset Math modules (the summation or
subtraction needs to be chosen depending on the type of used sensors). It is recommended not to use
any migration weights for crosswell migration, although in some cases other options for weights may
become preferable.
Examples
Below we show a few examples of different migrations which can be conducted with this module. For each example, we show a portion of the input data and the imaging result. Most of these examples are based on a synthetic seismic dataset computed in the Marmousi model by finite-difference modeling. Note that the vertical axis in the depth migration results is depth, so the migration results in all the examples below are shown in the depth domain.
Below is an example of a walkaway data migration with this module. We input the separated upgoing
wavefield to the module, select ‘Walkaway’ Weight type, ‘P-P’ Wave type, 2D Shaping Filter type
and a slope filter with Preferred slope range equal to 15 degrees, set up the grid and the velocity
model, as well as create a dataset for the saving of traveltimes. In this synthetic dataset, the well is
located at X=1500.0, and receivers cover the range of depths between 1000 and 1500 m.
To run the migration of the ghost waves on the same data, two changes need to be made. We need to
input the downgoing wavefield instead of the upgoing one, as well as change the Wave type to ‘P-P
ghost waves’. Note that in this example the ghost wave migration is superior to the conventional P-P
reflection result shown above. Ghost waves provide better illumination in many cases. The migration
of ghost waves is, however, more sensitive to the accuracy of the velocity model than conventional
migration. Here, we provide a very accurate model, so the ghost wave migration is successful.
Next, we demonstrate a migration of a walkaway VSP survey similar to the first one, but in this case
the sources follow a more complex topography of the Earth’s surface, and the receivers cover the
whole well starting from the Earth’s surface. The datum is located just above the highest source, so all
the sources have positive values of SOU_ELEV (as the elevation axis in this module points
downwards). The provided velocity model is in agreement with this datum, i.e., the first samples of
velocity traces are exactly on the datum level, and the first sample of the migration result also
corresponds to this datum. When running the migration, we set the same parameters as previously; the only difference is that we set Mute turning waves to No, as the near-surface artefacts arising from this muting would highlight the source locations.
If the datum was set to be lower than in this example, some sources would have negative SOU_ELEV.
The migration would still be computed successfully due to the extrapolation of the model upwards for
the inclusion of all the sources (the model, however, would need to be edited to agree with the new
datum). The only change is that the migration result would be shifted, as the migration is always
computed starting from the datum level (all the parts of the subsurface above the datum are
disregarded).
For the migration of P-S converted waves, one needs to separate these waves from the data first. For
this example, we create a synthetic dataset by elastic modeling over a modified Marmousi model. We
then take the vertical component of modeled particle velocity and extract the upgoing converted waves
from it using a simple F-K fan filtering procedure. In real-life scenarios, one would extract the upgoing
converted waves by joint analysis of all the recorded wavefield components.
Synthetic data before wavefield separation
After the wavefield separation, the P-S waves are provided as input data to the converted wave
migration. To migrate converted waves, we set ‘P-S’ as Wave type. For this Wave type, the S-wave
velocity model needs to be provided in addition to the P-wave velocity. Here, we set the S-wave
velocity from P-wave velocity using a fixed ratio of 2. The migration result is shown below.
P-S converted wave migration result
This module provides additional functionality for the imaging of the reflected waves recorded during
the crosswell seismic surveys. To demonstrate this, we create a crosswell synthetic dataset with two
wells at X=1250 and X=1750 in the model above and place the receivers in one of the wells and the
sources in another one. We migrate the upgoing and downgoing components of this wave field
separately, using the dedicated Wave type options. We do not use any weights. After computing the
separate upgoing and downgoing migrations, one can sum them (or subtract one from the other,
depending on the sensors used) to obtain the final image.
Raw synthetic crosswell data (left), upgoing wavefield (center), downgoing wavefield (right)
Crosswell migration result (upgoing wavefield)
We also demonstrate the migration results on the field offset VSP dataset which is processed in our
VSP processing tutorial. This time, we import the processed dataset with upgoing wavefield already
separated and migrate it with ‘Offset’ Weight type, Preferred slope range equal to 10 degrees and
Mute turning waves set to No (all the other parameters have default values). We also increase the
Vertical velocity smoothing to 600 m to smooth out the strong velocity contrasts which occur in the
layered models created by Advanced VSP Display.
Processed field offset VSP dataset, upgoing wavefield (left) and migration result (right)
References
Abma, R., Sun, J., & Bernitsas, N. (1999). Antialiasing methods in Kirchhoff
migration. Geophysics, 64(6), 1783-1792.
Dillon, P. B. (1990). A Comparison Between Kirchhoff and GRT Migration on VSP data. Geophysical
Prospecting, 38(7), 757-777.
Yu, G., Marion, B., Bryans, B., Carrillo, P., Wankui, G., Yanming, P., & Fanzhong, K. (2008).
Crosswell seismic imaging for deep gas reservoir characterization. Geophysics, 73(6), B117-B126.
Yu, D., Yang, F., Wen, B., Wang, Y., Huang, D., & Zhao, C. (2022). Gaussian Beam Migration for
Free-Surface Multiples in VSP. Frontiers in Earth Science, 10, 851206.
3C Orientation
This module converts PM-VSP seismograms into the PRT system by orienting the P component along the energy maximum within a window.
Parameters
• Window length – the length of the window from the first arrivals (expressed in ms); the seismogram orientation is carried out according to the data in this window. If the window is too small, it may cause unstable operation. If the window is too big, then besides the direct P-wave it will also contain waves reflected from adjacent boundaries, waves refracted on them with conversion, and other waves.
• XY Rotation, YZ Rotation, ZX Rotation - these parameters (expressed in degrees) allow
additional rotation of the coordinate system in respective directions.
Data preparation
This module obtains the first arrival times from the FBPICK field of the trace headers. Therefore, before applying this module, you must write them in. To do this you should:
1. Select the Depth, Depth fields in the Tools/Pick/Pick Headers menu. Then, define the times of
first arrivals and save them as a pick.
2. Start visualization of the field data of all three components by indicating the CHAN, DEPTH header fields (it is better to specify them in this order, though it is not compulsory) in the Sort field of the Trace Input and "*:*" in the string editor below Sort fields. It is also useful, though not compulsory, to select the Ensemble boundaries flag in the visualization parameters.
3. Load first arrivals pick (Tools/Pick/Load Pick).
4. Save the pick to file headers (Tools/Pick/Save to headers).
After the pick has been saved to file headers, the corresponding file should be saved and registered in the database. Besides, the coordinates of the source and receiver positions should be saved to headers.
To enable module operation, the input data must be sorted so that the traces corresponding to one depth are adjacent. This can be achieved, for example, by specifying the CHAN, DEPTH (this order is compulsory) header fields in the Sort field of the Trace Input and "*:*" in the string editor below Sort fields.
2С Rotation
This module is used for orientation of any two components of multicomponent VSP data. Data to be input into the module may be either 2C or 3C and must be sorted by depth and component number. The components must be numbered as 1, 2 and 3.
Parameters
Angle Parameters
• Save to – if this option is selected, the module outputs the components rotated at the calculated
angle and saves the angle to the corresponding header.
• Load from – if this option is selected, the module does not calculate angles but simply takes them from a specified header field and rotates the components selected in the Min and Max fields accordingly.
• Load horizon from – the pick relative to which the calculation window is positioned; it must be stored in the header field specified in the Load horizon from drop-down list.
• Calculation method – the module employs two methods to determine the component rotation angle:
▪ RMS Amplitude – root-mean-square amplitudes are calculated over the specified
window for the two components selected. These two values of RMS amplitudes are taken
as a projection of the full amplitude vector and are used to build the vector. Then the
component selected in the Max field is oriented along this vector while the component
selected in the Min field is oriented perpendicular to the vector.
• Window Length – the amplitude values used for rotation angle evaluation are calculated within this window.
• Window Position relative to Horizon – position of the window relative to the pick.
▪ Symmetric
▪ Below
Components
• Get Components. The header field containing the component numbers may be selected using
the Get Components From option (COMP header field is set by default).
• Min/Max. As a result of running the module, two components specified in the Max and Min
drop-down lists are rotated according to the calculated values while the third component (if any)
remains unchanged.
sym2ort
This module is meant for converting 3C data recorded with a downhole tool with an orthogonal geophone configuration into data as if recorded with a downhole tool with a symmetric geophone configuration, and vice versa.
The only parameter of the module (Type) defines the direction of conversion.
The input data should be sorted by receiver location and, after that, by components (for example,
DEPTH:CHAN - by cable depth and, after that, by channel). It is assumed that the CHAN field contains
the record component (1 - X, 2 - Y, 3 - Z or 1,2 or 3 for symmetric configuration).
Besides, it is always assumed that each receiver location corresponds to 3 recorded components.
VSP Geometry*
The module is designed for inputting geometry and inclinometry into VSP data. VSP Geometry* is a stand-alone module, that is, it does not need any additional input/output routines in the flow. The dataset in the project database into which the geometry is to be input is a parameter of the module itself. The module fills in a standard set of trace header fields needed for VSP processing and also inputs inclinometry information when needed.
Module parameters
The parameter dialog of the routine contains two tabs: Geometry and Headers.
On the Headers tab specify trace header fields to be read or overwritten during the module operation.
Geometry tab
Set distance and azimuth – the source X and Y coordinates will be calculated based on its azimuth and distance from the collar of the well (the coordinates of the collar of the well are considered to be 0,0):
Azimuth – defined in degrees, minutes and seconds from the north. Any of the three fields can be
decimal. After the dialog is closed and reopened again, the specified azimuth will be recalculated into
the form of degrees, minutes and seconds with decimals. For instance, you can specify the azimuth as
following:
Inclinometry – this flag allows loading inclinometry information for an inclined well.
If it is checked, the REC_X and REC_Y values will be calculated based on the inclinometry information read from a specified file (the file format is described in Appendix 1 below):
Mark duplicated traces – if this flag is checked, the traces with the same value of cable depth (DEPTH) and of the same component (COMP) will be marked by writing 1 to the user-specified header field. For all other traces, this header will be assigned 0.
Except the last of duplicated – this flag is active only when Mark duplicated traces is checked. When it is checked, the last one of the duplicated traces is not marked – its marking header is assigned 0. This ensures that at least one trace of each set of duplicated traces remains unmarked and thus can easily be kept while the others are sorted out on input.
Module operations
The module uses the following formulas:
• Without inclinometry: Receiver X = Receiver Y = 0.
• With inclinometry, the absolute depth, X and Y of the receivers are calculated based on the interpolated inclinometry information.
It is a tabulated ASCII file with 3 tab- or space-separated columns: cable depth in meters (DEPTH), vertical angle in degrees (Angle), and azimuth in degrees (Az). An example of the file is shown below:
VSP Data Modeling
This module allows you to create a synthetic gather of transmitted P-waves, refracted converted PS-waves, reflected non-converted PP-waves, and reflected converted PS-waves, and enter it into the flow. The problem is solved for a horizontally layered model at arbitrary positions of the source and receivers.
Parameters
Layer model file is a velocity model file. File selection is accomplished in a standard Windows dialog
box after clicking the Browse... button. This should be a text file with spaces or tabs used as
separators. It must contain a column for the depths of the bottoms of all layers (depth should be
expressed in meters) and a compressional wave velocity column (velocity should be expressed in
km/sec). If shear waves are to be taken into consideration there should also be a shear wave velocity
column. In the first line of the table the names of columns should follow after the “~A” symbols. The
depths will be read from the Z column, compressional wave velocity from the Vlay column, and shear
wave velocity from the Vslay column.
~A Z Vlay Vslay
10 1.6 0.8
Receiver geometry file is a file containing the data on receiver locations (distances expressed in meters); the file structure is similar to that in the previous case, and the following column names are distinguished:
X – receiver X coordinate
Y – receiver Y coordinate
Z – receiver Z coordinate
While calculating, the DEPTH is not taken into account; it is simply entered into the headers of the traces under creation.
• Source X – source X coordinate (m).
• Source Y – source Y coordinate (m).
• Source Z – source Z coordinate (m).
• Generated trace dt - sampling interval of generated traces (msec).
• Generated trace length - length of generated traces (msec).
• Impulse F1, Impulse F2, Impulse F3, Impulse F4 - define amplitude frequency spectrum of
the source wavelet to be used for generation of synthetic traces. The spectrum is built
according to the following rules:
Generated wavelet is zero-phase.
For the nearest shotpoint (SP) you can use an incident wave arrival-time curve. Let us assume that for some trace the arrival time of the direct incident wave equals T1 and the arrival time of the wave reflected from some boundary equals T2. Then, when the SP is close enough to the wellhead, the time when the wave falls on the boundary equals T3 = T1 + (T2 − T1)/2 = (T2 + T1)/2. Thus, by using the incident wave arrival-time curve (having extrapolated it below the borehole bottom either linearly or with the help of information about the velocity law) you can define the depth of this boundary and, of course, the reflected wave path to the boundary and after reflection: R = R(before falling on the boundary) + R(after reflection).
Input data
The input data for vertical seismic profiling spherical divergence correction is a compressional VSP seismogram with the REC_ELEV and FBPICK header fields filled in.
Parameters
When this module is activated the following window appears:
• Alpha – the exponent of the R(t) function (see Algorithm); set to 1 by default
• Const – a constant by which the correction function is multiplied; set to 1 by default
• Velocity below borehole bottom – velocity below the borehole bottom (expressed in km/s); set equal to 3 km/s by default
Algorithm
• Pairs of REC_ELEV/FBPICK trace header values are selected for all traces within the whole frame. These pairs are then sorted according to REC_ELEV. Repeated depths are processed: if there are several traces with the same REC_ELEV value, the mean FBPICK is taken.
• For every trace, the following correction function is calculated and applied:
F(t) = (R(FBPICK) × Const)^Alpha for t < FBPICK;
F(t) = (R(t) × Const)^Alpha for t ≥ FBPICK.
Here:
Advanced VSP Display
The module interface consists of two parts: a window for initial parameter settings and a primary running
window that appears during execution of this module. The dialog box for parameters setting can be
opened during module selection, but, in this case, all parameter changes will be active only during the
current module execution session.
Initial data
The initial data for the module should be organized in the following way:
VSP seismogram with the following fields filled in: REC_X, REC_Y, REC_ELEV, DEPTH (coordinates and receiver cable depth), SOU_X, SOU_Y, SOU_ELEV (source position data), FBPICK – transmitted wave arrival time. The incident wave amplitude (without amplitude divergence correction, etc., but after taking into account the possible increasing gain factor of the recording equipment) calculated via the SSAA module can be contained in a separate field. The seismogram should be sorted by increasing depth and should not contain traces with repeating receiver depths.
Well-logging data can be transmitted in an ASCII text table file with the following structure:
~A DEPTH N1 N2 N3
In the DEPTH column, the cable depth is indicated. In the rest of the columns the data of some logging
methods are indicated, from which any two columns can be selected.
Parameters
• Logging data (LAS) file – Name of the file with logging curves. To select a file, click the
Browse... button.
• LAS column name(s) – the list of selected columns. You can access the list of columns available in the file by clicking the Edit button and, in the dialog window that opens, moving the names of at most two columns from the Available window to the Added window.
• Load model file - a file with initial velocity model. A new one will be constructed on the basis
of this model. The file format is the same as that of the file required for the VSP Data Modeling
module, however, only the Z column is needed. By clicking the Browse... button the user can
select a file in a standard dialog.
• Save model file – the file to which the obtained final velocity model will be recorded. The format is similar to that in the previous case. The Z, DEPTH, Vlay and Vmean columns contain the receiver depths, the cable depths (counted from what is assumed to be zero cable depth), the layer velocities and the mean velocities, respectively.
• Start Z (m), End Z (m) - define depth intervals (height scale) to be visualized in the window
(all data can be viewed by scrolling).
• Altitude correction (m) – wellhead altitude.
• Start time (ms), End time (ms) - define time interval for VSP data (horizontal scale) to be
visualized in the window (all data can be viewed by scrolling).
• Trace scale - amplitude multiplier used during visualization.
• Trace step (m) - depth step with which the output traces are to be selected.
• Interval velocity calc. base - base (in traces) for interval velocities calculation.
• Regularity parameter – parameter used during velocity model calculation by the layer stripping method. Increasing this parameter results in smoother velocity values, which, however, fit the initial data on the transmitted waves' arrival times less closely.
• Attenuation/Get amplitudes from - traces header field from which the incident wave
amplitude will be obtained during absorption calculation.
When executing a flow containing the Advanced VSP Display module, a window similar to the one shown in the figure will appear, provided the parameters have been specified correctly:
The window is divided into 5 parts and contains the following sections (from left to right):
• the depth scale (cable depth if L is indicated in the header, or true depth if Z is indicated);
• the set of logging curves selected by the user (this column may be absent);
• the section with charts of measured amplitudes (Amp) and calculated mechanical quality (Q);
• the column with velocity (V) charts;
• the main window with the VSP seismogram put on the vertical, with reflected-wave arrival-time curves and a duplicated reduced arrival-time curve.
The logging curves are displayed in the same color as their names in the upper part of the column (red and green).
In the status bar the values of two-way traveltime and depths for current cursor position are shown. In
addition, for the layer, where the cursor is placed, elastic wave velocity and quality values are displayed.
Menu
Export results – allows exporting the layer velocity model (Layer model file) and the per-trace velocity model (Per-trace file) into ASCII files. The formats of the files are described in the NOTE box below.
The first line starts with ~A and defines the column names. The following columns can be present:
Z – vertical depth of the layer bottom in meters; Depth – cable depth of the layer bottom in meters; Vlay – velocity within the layer, estimated as the slope of a straight line fitting a set of first arrival time-curve values within the layer; Sigma – r.m.s. deviation of the time-curve values from the straight line; Q – the Q-factor. If the well is vertical, the columns Vlay1 and Sigma2 replicate Vlay and Sigma, respectively. Otherwise, Vlay1 contains the velocities estimated by layer stripping, while Sigma2 contains the r.m.s. deviation of the time-curve values from a straight line corresponding to the Vlay1 velocity.
The first line starts with ~A and defines the column names. The following columns can be present:
Z – vertical depth of the current trace in meters; Depth – cable depth of the current trace in meters; FBPICK – first break arrival time (ms); T – two-way vertical travel time calculated from the first break (ms); T1 – two-way vertical travel time calculated from the velocity model (ms); Vint – interval velocity (km/s); Vmean – mean velocity (km/s).
• Parameters - allows setting up module parameters. The window for parameters setting - VSP
Display Parameters - will open (see Module parameters chapter).
• Layer Velocity – the menu for managing layer velocity calculation by means of layer stripping. It contains two submenus:
• Recalculate - when selecting this command the layer model is recalculated taking the
refraction into account;
• Scanning parameters - this command opens the window for velocity scanning parameters
setting:
Here:
Start velocity, End velocity - start and end velocities in the model (expressed in km/s)
Axis parameters – selecting this command opens the window for adjusting the dynamic range of the axes displayed on the logging curves.
Quality display – the menu for Q-factor calculation; it contains one submenu:
Frequency – this command allows the user to specify the central frequency (Hz) used for the estimation of Q
Amplitude - the menu for amplitude displaying parameters changing contains the following submenus
Scale type - enables scale type selection between logarithmic type and linear one
Show - enables displayed curves selection: Initial amplitude - the initial values, with SDC correction
- with spherical divergence correction, True amplitude - with divergence and boundary passing
correction
Z scale - the menu allows depth scale selection (Z - vertical, Depth - cable).
Depth, time, parameter value axes are the elements of management that allow changing the
corresponding scale. To do this, click the left mouse button (MB1) on the start axis value and, holding
it down, move the mouse to the position of the end axis value and release the button. To return to the
initial scale on the selected axis, click on the corresponding axis with the right mouse button (MB2).
To add a layer boundary, click the left mouse button (MB1) on the seismogram at the position where the boundary is to be added.
To move a layer boundary, click on it with the left mouse button (MB1), drag it to the new position, and release the button.
To delete a boundary, double-click on it with the right mouse button (MB2).
During seismic wave propagation, a part of the energy is spent on inelastic deformations of the medium, i.e. is absorbed. It is considered that the wave amplitude decrease caused by absorption can be described by the following formula (Aki, Richards, 1983):
A(x) = A0 · exp(−w·x/(2cQ))    (1)
where:
c is the phase velocity,
w – circular frequency,
A – amplitude (A0 – initial amplitude),
x – distance travelled along the ray,
Q – the quality factor.
It should be noted that in our case a spatial parameter Q is used whereas in most of similar tasks the
time parameter is used.
In order to simplify the task it is assumed that the Q parameter does not depend on frequency. As a
rule, within seismic range of frequencies such assumption is quite acceptable and does not result in
considerable errors.
Highly absorbent mediums, evidently, correspond to low Q values. In most of the cases the Q value is
between 50 and 300 (Hatton, and others, 1989).
There are two absolutely different Q parameter evaluation techniques. The first one involves
investigation of amplitude decreasing degree throughout the borehole taking divergence and
transmission into account. The second one involves analysis of frequency spectra of incident waves.
In the module the Q parameter evaluation is accomplished from reflection amplitude variation.
From (1) it follows that ln A(x) = ln A0 − w·x/(2cQ), i.e. for a harmonic signal with some frequency w the logarithmic amplitude decreases linearly with distance. This means that, by dividing the curve of the logarithmic amplitude of a signal into approximately linear sections, approximating them by line segments (via the least squares method – the same as for layer velocities) and knowing the velocities, you can obtain the Q values. Since a real seismic signal, as a rule, is quasi-harmonic with a pronounced central frequency, a rough estimation of the intrinsic attenuation can be obtained by the same method by substituting w with the central frequency of the signal.
This is how the acoustic quality is calculated in the module. During calculation, the amplitude values
corrected for spherical divergence and transmission are used. The central frequency used while Q
calculation is specified by the user in explicit form through the Quality display/Frequency menu.
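A sketch of this estimation over one approximately linear section of the logarithmic amplitude curve (variable names are assumptions; amplitudes are assumed to be already corrected for divergence and transmission, as stated above):

import numpy as np

def estimate_q(distance_m, amplitude, velocity_m_s, central_freq_hz):
    w = 2.0 * np.pi * central_freq_hz                        # circular frequency
    slope, _ = np.polyfit(distance_m, np.log(amplitude), 1)  # least-squares line fit
    return -w / (2.0 * velocity_m_s * slope)                 # from ln A = ln A0 - w*x/(2*c*Q)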
Surface-consistent source correction can be carried out using reference seismometer data. If the wavelet does not change much from excitation to excitation, you can read the amplitudes of, for example, some pronounced phase of the direct wave, normalize them, and divide the VSP seismogram traces by the respective numbers. Otherwise, if the wavelet changes significantly, the surface-consistent source correction can be performed by means of deconvolution.
Surface-consistent receiver correction must at least take into account recording equipment gain
coefficient variations during the acquisition. Besides that, while working with a multimodule instrument
you must take into account that different modules in the instrument often differ distinctly in sensitivity.
It is convenient to do corresponding corrections during the general preprocessing in a data processing
flow.
VSP NMO
This module allows application of normal moveout corrections to VSP data obtained in deviated wells with offset source positions; the corrections are computed for a horizontally layered model.
Theory
In VSP observations, the positions of the elastic wave source and the receiver almost never coincide. In CDP reflection processing, one always tries to obtain the section that would be recorded if a coincident source-receiver pair moved along the surface corresponding to the reference level; to do this, at a certain stage the Normal Moveout Correction routine is applied. Because VSP observations are often used for depth-tying CDP data, it is useful to transform the VSP data before the tie, so that every trace of the vertical profile recorded at a certain source-receiver position obtains the form it would have had if the source and receiver were coincident and located at a certain level. This module transforms every trace to the form it would have had if the source and receiver had been at the position defined by the user (source elevation, receiver elevation, source-to-receiver horizontal distance).
The input data for the module are a VSP seismogram with given geometry (SOU_X, SOU_Y, SOU_ELEV, REC_X, REC_Y, REC_ELEV fields) and a one-dimensional layered velocity model. As a result of the transforms applied to the seismogram traces individually, you obtain the seismogram corrected for normal moveout.
Module parameters
Algorithm
For correction calculation and application, the following transform is applied. The model is treated as horizontally layered: the medium is assumed to be a half-space restricted by a horizontal plane (the half-space is below the boundary) and divided into layers with boundaries parallel to the half-space boundary. The elastic wave propagation velocity (only one velocity is important in this task) inside each layer is constant.
Let us assume that the source is at the point with the SOU_X, SOU_Y, SOU_ELEV coordinates and the receiver is at the point with the REC_X, REC_Y, REC_ELEV coordinates. In this case the obtained trace must correspond to the source and receiver position defined by SOU_ELEV_new, REC_ELEV_new and SOU_ELEV_dist (source-to-receiver horizontal distance).
Starting from the greatest depth of the source and receiver, we move downward with a specified step. For every considered depth we calculate the time required for the ray to cover the distance between the source and receiver after reflecting from the boundary situated at this depth. If the ray cannot cover the distance from source to receiver after reflecting from the desired depth (total internal reflection), the corresponding amplitude on the trace is muted.
This algorithm is applied to both the old and the new source-receiver positions. As a result, we obtain two time sequences corresponding to the same set of depths. After that, by means of interpolation, we obtain the trace corresponding to the new geometry, as sketched below.
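The following Python sketch illustrates the idea for the simplest possible case – a homogeneous half-space instead of the full layered model, using the image-source traveltime. Function names, the depth convention, and all numeric values are our own illustrative assumptions, not the module's actual implementation:

import numpy as np

def reflection_time(depth, v, sou_depth, rec_depth, offset):
    # Image-source traveltime to a reflector at `depth` below datum in a
    # constant-velocity medium (a stand-in for the layered-model ray tracing).
    path = np.sqrt(offset**2 + ((depth - sou_depth) + (depth - rec_depth))**2)
    return path / v

# Time sequences for the old and the new geometry over a grid of reflector depths.
depths = np.arange(500.0, 3000.0, 10.0)          # m
t_old = reflection_time(depths, 2000.0, sou_depth=0.0, rec_depth=800.0, offset=300.0)
t_new = reflection_time(depths, 2000.0, sou_depth=0.0, rec_depth=0.0, offset=0.0)

# The corrected trace is the input trace read at the old times and
# re-mapped onto the new times by interpolation.
dt = 0.002                                       # s, sample interval
trace = np.random.randn(2000)                    # placeholder input trace
t_axis = np.arange(trace.size) * dt
amp_at_depths = np.interp(t_old, t_axis, trace)  # amplitude for each reflector depth
corrected = np.interp(t_axis, t_new, amp_at_depths)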
Hodogram Analysis
This module is used to calculate rotation angles for multi-component VSP data.
Procedure
• 2D or 3D data sorted by two fields – depth and component numbering header – are input into the
module. The component numbers in the headers must be within the range of 1 to 3.
• In the Component header field, select the header containing the component numbers. Specify the
component numbers that will contain the minimum and maximum output amplitudes in the Minimum
component and Maximum component fields, respectively.
• Set the rotation angle calculation parameters – time window size and the pick relative to which
this window will be positioned on the traces. The initial angle values may be loaded from a header (Load
rotation angle).
• Specify the header where the module output will be saved (Output angles header).
• The output can contain both the source traces and the rotated traces. To rotate the traces by the
required angle, check the Output rotated traces box.
• The main interactive window of the module will appear after the workflow is launched (Run).
• The angles are calculated for each depth contained in the data. Specify the target depth and
define the time interval in which the amplitudes will be processed in the right window pane.
You can calculate the rotation angle for the current depth within the specified time window or
for all depths at the same time (using the corresponding buttons).
• The arrow in the left window pane shows the coordinate rotation angle. You can set a
custom angle by rotating the arrow with the mouse in the required direction.
• The rotation result will be shown on the right, in the two bottom panes containing the traces.
The first pane shows the trace with the re-calculated maximum amplitude, and the second one
shows the resulting trace with the minimum values after rotation.
• You don’t need to save the angles after they are calculated – just close the window, and the results will be output to the header selected at the beginning of the procedure. To save the resulting values to a dataset, use the Trace Output module.
Module parameters
The numbers of components that will contain the minimum and maximum output amplitudes are
specified in the Minimum component and Maximum component fields, respectively.
Output angles header – header where the resulting angles (radians or degrees) will be saved.
The Output rotated traces checkbox allows applying the calculated angle values to the traces to rotate them.
Working with the module
When the workflow is launched (Run), a window appears consisting of two working panes, the menu
bar and the graph of angle values for each depth:
The right pane displays the traces of the two selected components corresponding to the current depth and the resulting traces after the rotation. For a given time, the first component amplitude serves as the X coordinate in the left window, and the second component amplitude as the Y coordinate. The position of the current time window on the trace is also displayed here. You can change the window size and position manually: place the mouse cursor over the start time, press the left mouse button, select the target interval, and release the button.
The borehole diagram showing the rotation angles from −180° to 180° for each depth is displayed at the far right of the window.
The left pane displays a set of points on the XY coordinate plane corresponding to the amplitudes of the selected trace segment within the time interval selected in the right pane. The points are connected with lines. Each point represents a pair of coordinates (x, y), where x is the first component amplitude and y is the second component amplitude. The color palette matches the selected time interval.
Menu and toolbar
The menu contains the same commands as the toolbar. Therefore, they are only described once.
Rotate current / Rotate all – the first two buttons are used to recalculate the rotation angle by setting an angle that corresponds to the point furthest from the origin of coordinates (the maximum value of the square root of the sum of squares of the first and second component amplitudes; see the sketch at the end of this section). The first button performs the recalculation for the current depth (2 components), the second one – for all depths.
Zoom in/out – this tool allows changing the zoom level. Select the appropriate option: the first button zooms in on the image, the second one zooms out. If the Zoom in command is selected, hold down the left mouse button and select a rectangular area to zoom in on. If the Zoom out command is selected, left-clicking the image will revert it to the previous zoom level.
Export to TXT. This button opens a dialog box displaying the calculated rotation angles for all
depths (in radians). You can save the angles and the depths to a text file.
The slider control allows scrolling through the depth values. There are two ways to switch to the target depth – using the slider or entering the value manually.
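A minimal Python sketch of the angle calculation and rotation described above (our own illustration; the actual module may differ in details such as angle conventions):

import numpy as np

def rotation_angle(x, y):
    # Angle of the hodogram point furthest from the origin, i.e. the sample
    # with the maximum sqrt(x^2 + y^2) within the time window.
    r = np.hypot(x, y)
    k = np.argmax(r)
    return np.arctan2(y[k], x[k])

def rotate_components(x, y, theta):
    # Project the two components onto the rotated axes: the "maximum"
    # component along theta and the "minimum" component orthogonal to it.
    x_max = x * np.cos(theta) + y * np.sin(theta)
    y_min = -x * np.sin(theta) + y * np.cos(theta)
    return x_max, y_min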
QC (quality control)
Ensemble QC
Ensemble QC Compute is designed for the computation of the main parameters that allow estimating the quality of seismic data within a certain time window and a limited offset range.
Trace ensembles are input to the module; the OFFSET header field must contain the offset value. The output parameters of the seismic record are saved in header fields defined by the user. If a parameter cannot be determined for some reason, the corresponding header field will contain the value -1.0. The module fills the indicated header fields of all ensemble traces, not only those that fall within the given offset range.
The module allows calculating an average absolute (or mean square) amplitude in a window, signal/noise
ratio, estimation of the record resolution and apparent signal frequency.
• Polygonal - the window is defined as a polygon that the user selects from the database by pressing the Select object... button. Besides the selection from the database, it is possible to specify the directory where the desired polygon is located with the Select location... button. After the directory is selected, the name of the polygon is put in the Polygon line either directly or with the help of the replica system (see the Replica system section).
• Square – the window has a rectangular shape:
▪ Min/Max offset – minimum and maximum offset
▪ Min/Max time – minimum and maximum time.
The module itself chooses the data that falls into the window from the data processed in the running flow, so there is no need to limit the offset range when sorting (of course, you should not set it narrower than the range within which you need to get the estimates).
Skip bad traces if - disregard individual traces whose header does not meet the specified condition
when calculating ensemble attributes.
(here A_ij is the j-th sample of the i-th trace, N is the number of traces in the window, and T is the number of samples in the window)
• 1D RMS – the root mean square amplitude is calculated for each trace, and a simple average of these per-trace estimates is used for the evaluation (a minimal sketch follows this list):
A_{1D\,RMS} = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\frac{1}{T} \sum_{j=1}^{T} A_{ij}^{2}}
• Trace Header – trace header where the window amplitude estimate will be saved.
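A minimal Python sketch of the 1D RMS estimate (an illustration, assuming the window has already been extracted as an N-by-T array):

import numpy as np

def rms_1d(window):
    # window: 2-D array, shape (N traces, T samples)
    per_trace = np.sqrt(np.mean(window**2, axis=1))  # RMS of each trace
    return per_trace.mean()                          # simple average over traces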
Processing mode group:
Signal / Noise ratio group – allows switching on the calculation of the signal-to-noise ratio:
• Output trace header – select the trace header field to output the value to.
• Min/Max freq – minimum and maximum frequencies of the range used for the signal-to-noise ratio calculation.
• Max shift – maximum shift between the traces within which the maximum of the cross-correlation function is searched for.
• Method – mode of calculation:
▪ Normal – cross-correlation functions are calculated inside the ensemble between the neighboring traces, as well as the autocorrelation of each trace; then the mean cross-correlation and autocorrelation functions are calculated. It is assumed that the averaged cross-correlation function is the autocorrelation function of the signal, while the averaged autocorrelation is the autocorrelation of signal + noise. The module then calculates the spectrum of the averaged cross-correlation function, SCCF(f), and of the averaged autocorrelation, SACF(f); the signal-to-noise ratio is determined from the formula (see the sketch after this list):
S/N = \sum_{f=f_{min}}^{f_{max}} \frac{SCCF(f)}{SACF(f) - SCCF(f)}
▪ Use model trace – differs from the Normal mode in the way the correlations are calculated: the pairwise correlation is calculated between each ensemble trace and a model trace obtained by averaging all ensemble traces.
▪ Treat model trace as signal – in this mode the model trace is treated as a noise-free signal, and its autocorrelation function is considered to be the autocorrelation function of the signal.
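A minimal Python sketch of the Normal-mode S/N estimate (our own illustration; taking the amplitude spectra of the correlation functions is an assumption):

import numpy as np

def snr_normal(ensemble, fmin, fmax, dt):
    # ensemble: 2-D array, shape (N traces, T samples).
    # Mean CCF of neighboring traces -> signal ACF;
    # mean ACF of all traces        -> ACF of signal + noise.
    ccf = np.mean([np.correlate(ensemble[i], ensemble[i + 1], 'full')
                   for i in range(len(ensemble) - 1)], axis=0)
    acf = np.mean([np.correlate(tr, tr, 'full') for tr in ensemble], axis=0)
    freqs = np.fft.rfftfreq(ccf.size, dt)
    sccf, sacf = np.abs(np.fft.rfft(ccf)), np.abs(np.fft.rfft(acf))
    band = (freqs >= fmin) & (freqs <= fmax)
    return np.sum(sccf[band] / (sacf[band] - sccf[band]))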
Resolution calculation group – estimation of the resolving power of the data. The estimation is carried out using the formula:
\lambda = \frac{1}{2q} \cdot \frac{\sum_{i=1}^{T} A_i^2}{\sum_{i=q}^{N} A_i^2},
where 2q is the width of the principal half-period of the autocorrelation or cross-correlation function, and A_i are the samples of the autocorrelation function.
• Output trace header – the trace header field to which the value will be written (the value will
be in Hz)
• Max time of ACF to use – duration of the autocorrelation function that can be used for the resolution estimation.
• Method:
▪ Use mean ACF – uses the averaged autocorrelation function
▪ Use mean CCF – uses the averaged cross-correlation function
▪ Use separate CCFs – uses the cross-correlation functions of separate traces; the obtained estimates are then averaged
• Normalize CF – normalizes the correlation functions before averaging (this parameter also affects the apparent frequency estimation if it is calculated from the correlation function)
• Apparent frequency output header name – the trace header field to which the value will be
written (the value will be in Hz)
• Method:
▪ Number of sign changes – apparent frequency estimation based on the number of zero crossings. It is calculated for each trace using the following formula, and the values are averaged within the window (see the sketch after this list):
f_{ZC} = \frac{N_{ZC} - 1}{2(t_{last} - t_{first})}
▪ ACF – apparent frequency estimation based on the width of the main half-period of the ACF. It is calculated for each trace using the formula f_{ACF} = 1/(2q), where 2q is the main half-period width; the values are averaged within the window.
▪ Mean ACF – the same as the previous mode, except that the program uses the averaged ACF. If the Normalize CF mode is on, the ACFs are normalized.
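A minimal Python sketch of the zero-crossing estimate (an illustration; the module stores -1.0 when a parameter cannot be determined, and the sketch follows that convention):

import numpy as np

def apparent_frequency_zc(trace, dt):
    # f = (Nzc - 1) / (2 * (t_last - t_first)), from the zero-crossing count.
    idx = np.where(np.diff(np.sign(trace)) != 0)[0]  # samples before each crossing
    if idx.size < 2:
        return -1.0                                  # undetermined
    return (idx.size - 1) / (2.0 * (idx[-1] - idx[0]) * dt)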
Bandwidth calculation parameter group - allows choosing whether to calculate the spectral bandwidth:
• Averaging method:
▪ Average amplitude spectra - the option that determines the value for each trace first, and
then calculates the average value.
▪ Average integral values - the option that first calculates the average spectral curve
(average amplitude spectra for all traces), and then determines the spectral bandwidth for
it
• Minimum window length – enter the window length, in samples, within which the spectrum bandwidth will be calculated.
• Band width output header name – the header where the obtained value will be written.
• Band width calculation mode - spectrum bandwidth calculation mode:
▪ At - the spectrum bandwidth is measured at an amplitude level equal to the specified number of percent/decibels (Units) of the maximum amplitude (Peak amplitude).
▪ Amplitude spectra integral divided by maximum magnitude – the ratio of the area
under the spectrum to the maximum amplitude
The Peak frequency parameter group - allows selecting whether the maximum value of the
spectrum is to be calculated:
• Peak frequency output header name – the header where the obtained value will be written.
• Averaging method:
▪ Average amplitude spectra - the option that calculates the spectrum of each trace and
determines its peak. The obtained values are averaged and written to the specified header.
▪ Average integral values - the option where the spectrum is calculated for each trace, then
their average is calculated. The maximum is determined on the basis of the obtained
average spectral curve.
In order to increase productivity and calculation speed, you can enable the option to parallelize
calculations. To do that, set the number of cores used by the module (Number of threads).
References
1. Katz, Ptetsov S.N. Spectral analysis of regular signal and noise field // Physika Zemli, 1978, No. 1 (in Russian).
2. Hardy H.H., Beier R.A., Gaston D.J., 2003, Frequency estimates of seismic traces // Geophysics, 68, no. 1, pp. 370-380.
Correlation function compute
The module is designed for the computation of various correlation functions estimation, for the
indicated offset and time ranges. The trace ensembles are input to the module with the header field
OFFSET containing the offset value. As a result, each ensemble is replaced by an averaged ACF or
CCF. Moreover, the output traces headers contain the signal to noise ratio estimation.
Parameters:
The module picks the data falling within the window from the data processed in the flow, so there is no need to limit the offset range when sorting (naturally, it should not be set narrower than the estimation range).
The general fold count is the overall number of traces in the ensemble (bin); the effective fold count is the number of non-empty offset bins in the ensemble. For the calculation of the effective fold count, the traces inside the ensemble are distributed into groups according to offset. The offset range falling within one group is set by the Range parameter; the groups do not overlap, and there are no gaps between them.
Parameters:
• Min/Max offset – minimum and maximum offsets
• Range – offset range falling into one offset bin
• Min/Max azimuth – minimum and maximum azimuths (in degrees, the Y axis points to the North)
• CDP Fold – header field for the effective fold count
• Total fold – header field for the general fold count
Apparent velocity calculation
This module is designed for the calculation of the apparent velocity of the direct wave from the arrival times inside the trace ensembles. There are two methods of calculation: mean velocity calculation and approximation of the direct wave arrivals using the least squares method. The resulting value is output to the user-indicated header field of ALL ensemble traces.
Parameters:
• Offset – header field containing the source-to-receiver distance
• Time – header field containing the arrival times
• Velocity – header field that will contain the velocity value
• Min/Max offset – minimum and maximum offsets (the same header field indicated in the Offset parameter is used) that are allowed to be used for the velocity estimation.
• Mode:
▪ In Mean mode, the velocity computation is carried out using the formula:
velocity = \frac{1}{N} \sum_{i=1}^{N} \frac{Offset_i}{Time_i}
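In Python this mean estimate is essentially a one-liner (a sketch assuming the offsets and times have already been gathered as arrays; the numeric values are illustrative):

import numpy as np

offsets = np.array([100.0, 200.0, 300.0])  # m, from the Offset header
times = np.array([0.050, 0.101, 0.149])    # s, from the Time header
velocity = np.mean(offsets / times)        # mean apparent velocity, m/s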
(here S(f) – spectrum in time window, where the frequency estimation is carried out, f –
frequency, fN – Nyquist frequency).
• Apparent frequency – apparent frequency obtained from the estimate of the number of zero crossings:
f_{NZ} = \frac{N_{ZC} - 1}{2(t_{last} - t_{first})}
(here N_ZC is the number of zero crossings, t_last is the time of the last zero crossing, and t_first is the time of the first zero crossing)
The Trace header field is activated when you select an attribute; in this field you have to indicate the header to which the calculated attribute will be saved.
Horizon parameters with respect to which the attributes calculation window will be positioned are set in
the Horizon parameter.
• Specify Pick. Specify the horizon from the database. To do so, activate the option, click the Select… button, then choose the right file in the pop-up window.
• Specify header field. Specify the horizon from the header where it has been saved. Activate the option, click the Browse button, and select the needed header in the dialog window.
• Specify text. Specify the horizon manually in the Specify field.
CDP
100 2250
In this example:
CDP – the header field by which the horizon is specified;
100 – the header field value (in this case, the CDP number);
2250 – the horizon time at this position.
You have to indicate the length of the time window (in ms) used to calculate the seismic attributes in the Window length field. With respect to the given horizon, the time window can be:
• Symmetric
• Above
• Below.
Save template and Load template buttons are designed for saving current parameters of module in
the template of project database and loading parameters from the previously saved template,
correspondingly.
CrossPlot (Tied crossplots)
This module is used to analyze interrelations between header fields using crossplots and histograms.
A crossplot is built based on a pair of seismic trace header fields, with each point on the crossplot
corresponding to one trace. A histogram shows distribution of a single seismic trace header across value
ranges; the height of each column corresponding to a particular range is determined by the number of
traces with the selected header field values falling within that range.
The combination of all crossplots/histograms tied to one dataset together with all their parameters is
called a crossplot collection. The collection is stored in the project as a database object. Therefore, the
module works with a crossplot collection.
The user can select a so-called active crossplot (or histogram) which allows interactive highlighting of
various areas of interest with different colors. Highlighting an area on a crossplot is done by defining a
polygon enclosing that area; on a histogram, it is done by specifying the corresponding interval. When
the polygons are specified on the active crossplot, the points on all other crossplots assume the colors of
polygons within which the traces corresponding to those points fall on the active crossplot. Similarly,
when the intervals are specified on the active histogram, the points on all other crossplots within the
collection assume the colors of intervals containing the corresponding traces.
If necessary, a topographic base can be used as the crossplot background. The module also allows
printing arbitrary sets of crossplots and histograms with added titles and legends.
CrossPlot is a so-called standalone module, i.e. a module that forms the flow independently. Therefore, the module should be the only one in the flow; no additional modules are required.
Parameters
When the module is added into the flow, the CrossPlot Parameters dialog box appears:
• Get trace headers from dataset – the header fields that will be used to build the crossplots (histograms) may be selected from the project database.
• Get trace headers from ASCII file – the header fields that will be used to build the crossplots (histograms) may be selected from an ASCII file.
• Crossplot collection path – specify the name under which the crossplot collection will be saved to the database or select an existing name. If you leave this field empty, a collection object will be created under the current flow name by default.
Instead of using all traces from the dataset to analyze interconnections between the header fields,
you can choose to use only those that fall within a certain range. The range is defined by two
reference header fields.
• First Reference Header/Second Reference Header – select the reference header fields from the
list and specify the corresponding value ranges for them (much in the same way as header field
sorting is specified in the Sort Fields field of the Trace Input module). Defining the ranges by
reference header fields allows analyzing the headers of only those traces that fall within the
selected range instead of all traces in the dataset.
When you are done setting the module parameters, press OK.
Working with the CrossPlot Manager
Press Run to start execution of the flow. The CrossPlot Manager window will open. A list of objects
(crossplots and histograms) existing in the collection will be shown on the left side of the window.
Visible objects, i.e. crossplots and histograms currently displayed on the screen, will be checked. You
can toggle object visibility by left-clicking the corresponding checkboxes. The currently active
crossplot/histogram will be highlighted in red.
The Show all button allows making all crossplots in the list visible. The Hide all button hides all
crossplots.
Parameters:
• New Crossplot... – create a new crossplot/histogram.
• Edit Crossplot... – edit the parameters of the crossplot/histogram selected in the list.
• Delete Crossplot... – delete the selected crossplot/histogram from the list.
• Canvas – create a “canvas” with a set of crossplots and histograms as well as additional
information for printing.
• Save – save the current crossplot and histogram set.
• Exit – exit from the module. When this button is pressed, the program will prompt you to save
the crossplot and histogram set before exiting.
Creating a new crossplot/histogram
Press the New Crossplot... button in the CrossPlot Manager dialog box. The following new
crossplot/histogram setup dialog box will appear:
• First header (X axis), Second header (Y axis) – header fields that will be used to build the
crossplot/histogram (if a histogram is to be generated, only one header field will be available).
• Histogram – a histogram will be generated if this option is enabled; otherwise a crossplot will
be generated.
• Point properties / Histogram Color – display parameters for crossplot points/histogram
columns.
• Number of Columns and Column Width. When a histogram is built, additional histogram
parameters become available. You can specify either the number of columns (Number of
Columns) into which the entire range of the selected header field values will be broken down,
or the histogram column width (Column Width).
If a crossplot is to be generated, the point properties dialog box will look like this:
• Radius – non-negative number defining the crossplot point display method. 0 – point size is 1
pixel, 1 – points are displayed as crosses, 2 and more – points are displayed as circles with the
specified radius in pixels.
The color selection field next to it is used to select the crossplot point color.
If a histogram is to be generated, only the histogram column display color selection field will be
available.
After a new object is created, its window immediately appears on the screen. An example of a
crossplot window is shown below:
Crossplot/histogram window
This is what the window’s main menu and panel look like:
Here we will list all available commands and provide a short description for each one. The most
important commands will be discussed in greater detail below.
• Active plot – select the current crossplot/histogram as active. The same is achieved by pressing
the button on the toolbar (see the Working with active crossplots/histograms section).
• Load Background Image – load a bitmap background image.
• Unload Background Image – unload a loaded background image.
• (These commands are discussed in detail in the Loading and unloading bitmap background
images section).
• Export Image – export the crossplot/histogram to a bitmap image.
• Print... – print the current crossplot/histogram.
• Close – close the current window and save its parameters.
Select–commands from this menu are available only if the current crossplot/histogram is selected as
active. They allow working with crossplot polygons or histogram intervals. (These commands are
discussed in detail in the Working with active crossplots/histograms section).
• New Polygon/New Interval – create a new crossplot polygon/histogram interval. The same is achieved by pressing the corresponding button on the toolbar.
NOTE: In mathematical statistics, a quantile is a value which a given random variable exceeds only with a fixed probability.
View – this menu is used to set up the crossplot/histogram display window parameters:
Zoom – this menu contains the following options, active only for the crossplot window:
• Set axis ratio 1:1 – set the same scale on the X and Y axes (recommended for header fields of equal dimension, such as X and Y coordinates). The same is achieved by pressing the corresponding button on the toolbar.
You can achieve the same result by selecting the relevant section on the horizontal or vertical ruler
located at the top or left edge of the visualization window, respectively. To do this, press and hold down
the left mouse button at the start of the section on the ruler, drag the cursor to the end of the section, and
release the mouse button. The range selected on one of the rulers will be scaled up to full-screen size
(the scale on the other ruler will remain unchanged).
To revert to the original scale (i.e. to make the entire image area fit into the view window), double-click the right mouse button in the crossplot/histogram window.
To set up the axis display parameters, double-click the left mouse button on the axis. The following
dialog box will appear:
Autoscale – when this option is enabled, the axis scale is determined automatically based on the axis
length and the header field value variation range.
• Step – tick interval in units corresponding to the header field. This field is available only if the
Autoscale option is disabled.
• Tick length (mm) – tick length in mm.
• Show values – show/do not show scale values.
• Show grid lines – show/do not show grid lines.
• Number per primary – number of minor scale ticks per primary tick.
• Tick length (mm)– tick length in mm.
• Show values– show/do not show scale values.
• Show grid lines– show/do not show grid lines.
Scale font – this button opens a standard dialog box allowing you to select the axis tick label font
parameters.
Title font – this button opens a standard dialog box allowing you to select the axis title font parameters.
Number format – opens the Number format dialog box where you can manually set the display rules
for axis titles.
If the Manual Set box is unchecked, you can specify the end of the first quantile and the beginning of the last quantile (Use Margins).
The left part of the dialog box contains the list of all headers in the dataset, the right part – the list of
those headers whose values will be displayed in the status bar. Press Add→ to add headers to the list on
the right; press ←Remove to remove headers from that list. When you are done selecting the headers,
press OK. Now when you hover the mouse cursor over a point on the crossplot, you can see the values
of the header fields from the list assigned to the currently selected point in the status bar. An example is
shown below:
When the File/Load Background Image command is selected, the following dialog box appears:
Select the bitmap background image in the Background file field using the Browse... button. The
following formats are supported: BMP, JPEG, GIF, TIFF, PNG. Set the edge coordinates of the image
being loaded in the Left X, Right X, Top Y and Bottom Y fields: left and right edge along the X axis
and upper and lower edge along the Y axis. By default these fields contain coordinates matching the
minimum and maximum values on the X and Y axes on the crossplot. In the example shown below an
aerial photograph is used as the background image.
You can change the background corner coordinates at any time by re-opening the Load Background
dialog box.
To unload an existing background image, select the Unload Background Image menu item.
Information on the image size in mm, resolution in pixels and file size in bytes (uncompressed) will be
displayed on the right side of the dialog box.
When you are done setting the parameters, press Next and select the filename and format when
prompted by a dialog box.
The left part of the window is the print preview area. The crossplot boundaries are shown in short dashes,
the sheet boundaries – in long dashes.
The current printer settings are shown in the text area in the lower right part of the dialog box.
Working with active crossplots/histograms
An active crossplot (or histogram) is a user-selected crossplot/histogram on which polygons (intervals)
determining the colors of points on other crossplots can be defined.
Only one crossplot/histogram can be active at any point of time. However, you can switch between
several crossplots, making them active one at a time. In this case each crossplot will have its own set of
polygons, as can be seen from the pictures below which show one and the same pair of crossplots twice
– first with the right crossplot as the active one, and then with the left crossplot in that role:
• By pressing the button on the toolbar or selecting the File/Active plot menu item in the
crossplot/histogram window.
• By double-clicking the right mouse button on the crossplot/histogram name in the list in the
Crossplot Manager window.
When a crossplot becomes active, the Polygons Manager window appears, allowing you to work with
polygons/intervals.
The Polygons Manager window allows creating polygons/intervals on the active crossplot/histogram.
It lists the names of created polygons or intervals. To select a polygon, click its name in the list. After
that you can edit the selected polygon, change its properties and location relative to other polygons, or
delete it.
When a polygon/interval is defined on the active crossplot/histogram, the points on all other crossplots
corresponding to seismograms that fall within the specific polygon/interval assume the color of that
polygon/interval. If polygons intersect, the first polygon on the list in the Polygons manager window
will be shown on top of all other polygons on the active crossplot, and its color will be assigned to all
points within the intersection area.
To move the selected polygon/interval up or down relative to other polygons/intervals, use the following
buttons:
• Top – move the selected object to the top line of the list.
• Move up – move the selected object one line up.
• Move Down – move the selected object one line down.
• Bottom – move the selected object to the bottom line of the list.
The window’s other buttons allow creating new polygons (New Polygon), editing the selected polygon
boundaries (Edit Active Polygon), deleting the selected polygon (Delete Active Polygon) or deleting
all polygons from the list (Delete All). A more detailed description of the main functions available when
working with polygons/intervals is presented below.
To create a new polygon, press the New Polygon button in the Polygons Manager window. You can
also do it by using the Select/New Poligon(Interval's) menu item in the active crossplot window or
pressing the button on its toolbar. When you select this command, the Polygon Properties dialog
box will appear:
Specify the polygon name in the Name field (by default, the name polygon_N is assigned).
• Left bound – this field is used to set the interval’s left boundary value in the current horizontal
axis measurement units.
• Right bound – this field is used to set the interval’s right boundary value in the current horizontal
axis measurement units.
• Poligon's transparency – sets the polygon/interval transparency. The extreme right position of
the slider corresponds to a fully opaque image, the extreme left – to a fully transparent one (in
the latter case non-active polygons/intervals are not shown at all, and the active polygon/interval
has only its outline shown).
After creating a new polygon, draw it in the active crossplot window by adding points with single clicks
of the left mouse button. To close the path (complete the polygon), double-click the left mouse button.
To create a new interval in the active histogram window, select it by pressing the left mouse button on
one of the interval’s boundaries, dragging the cursor to the other boundary while holding the mouse
button down, and releasing the mouse button.
This will open the already familiar Polygon Properties dialog box described above. The same can be
achieved by double-clicking the left mouse button on the polygon/interval name in the list in the
Polygons Manager window.
• In the editing mode you can move an active polygon node by holding and dragging it with the
right mouse button.
• To add a new node, single-click the left mouse button.
• To delete an active polygon node, double-click the right mouse button on that node.
• You can also move the entire polygon around using the right mouse button while holding down
the Shift key.
To delete all polygons/intervals on the active crossplot/histogram, press the Delete All button.
Selecting this command will open the Histogram Range Colors dialog box which will show the
histogram broken down into 7 quantiles by default. The quantiles are calculated based on the assumption
of Gaussian distribution of the header field values, although this may not actually be the case. The slider
allows adjusting the shape of the Gaussian curve.
The dialog box is divided into two parts: the left one contains a graphic representation of automatic
splitting into intervals, while the right one displays the current list of intervals to be created with the
right boundary of each interval shown in percent.
To split the histogram into uniform linear intervals, select the Use linear intervals option.
You can specify an arbitrary number of quantiles/linear intervals in the Number of classes field (to
update the interval diagram, click the mouse anywhere within the dialog box except for the field itself).
The right part of the window contains the list of quantiles and their values. The name of each quantile is
shown in the same color as the corresponding interval on the histogram. Double-clicking a quantile name
in the list opens the Quantille Properties dialog box, allowing the user to edit the quantile.
Quantille Value – quantile (right boundary of the interval) value in percent (this field is active only if
the Manual Set box is checked in the Histogram Range Colors window).
The Print Multiple Crossplots dialog box consists of the parameter setting area on the left, and the
area where you can preview and interactively edit the canvas (which is a combination of objects –
crossplots, histograms, legends and titles) on the right.
The top left corner of the dialog box contains the list of objects that have been added to the canvas and
will be sent to the printer. Objects can overlap, so the order in which they are listed determines the
order in which they will appear when printed (the first object in the list will be printed over all other
objects etc.). You can change the order of objects using the up/down arrows to the right of the list.
After adding objects to the canvas and setting up their parameters, position and size (see below for a
detailed description of how to do this), you can print the canvas by pressing the Print button. The
information on the currently selected printer and its settings will be displayed in the lower right part of
the dialog box:
Press Printer Setup button to select a printer and set up its parameters.
You can save the current canvas state as the default collection canvas by pressing the Save button. This
way, when you close the canvas and then press the Canvas button in the CrossPlot Manager window
once again, the saved canvas will open.
If you need several different canvases, you can save the current canvas as a file on the disk (the Save as
button) and later load it from that file (the Load button).
The object size and position on the canvas specified in this dialog box can later be changed interactively
in the preview area.
A legend is a list of polygons within the active crossplot or intervals within the active histogram which
define the colors of points on all other crossplots. Press the Add Legend button to add a legend to the
list of objects displayed in the preview area. The following dialog box will appear:
• Left X – X coordinate of the title top left corner in mm.
• Top Y – Y coordinate of the title top left corner in mm.
• Width – title width in mm.
• Height – title height in mm.
• Transparent – enable/disable title background transparency.
• Angle – title rotation angle in degrees in the 0-90 range.
• Font – title font setup.
• Save as – save the canvas parameters to a text file with the specified name.
• Save – save the changes in the canvas parameters to a text file with the default name.
• Load – load canvas parameters from a text file.
The object size and position on the canvas specified in this dialog box can later be changed interactively
in the preview area.
• Selecting active objects. Select an object by left-clicking on it while holding down the Ctrl key. In places where several objects overlap each other, the topmost object will be selected (in accordance with the order in the list of displayed objects). You can also select an object by left-clicking on its name in the list (in the left part of the Print Multiple Crossplots dialog box).
• Moving objects around the canvas. Place the mouse cursor over the selected object you want to move; the cursor will change its appearance. Then grab the object with the left mouse button, move it to the desired location, and release the mouse button.
• Resizing objects. Place the mouse cursor over the lower right corner of the selected object (the
selected object will have that corner marked with a red square), press the left mouse button, move
the cursor to the new location while holding the mouse button down, and release the mouse
button. The object corner will move with the cursor, and the object size will change accordingly.
• Changing object parameters. To open the object parameter setup dialog box, double-click the
object with the left mouse button.
When objects are moved or resized on the canvas, the lines showing sheet layout and image printing
borders on the canvas are automatically redrawn.
Header QC
This module is designed for header quality control.
The module allows verifying that the header values fall within the specified range and checking the variation of the increment between adjacent traces.
Procedure
1. Select the header to be checked
2. Set the check parameters – range and increment
3. Select the error display method
Parameters
Header selection field – select the header to check from the drop-down list.
• Check range – check whether the header values fall within the specified range.
▪ Min – minimum value of the range.
▪ Max – maximum value of the range.
• Check increment – check the increment between consecutive traces.
▪ Min – minimum allowed increment.
▪ Max – maximum allowed increment.
• Error output mode – error output parameters.
▪ Write to file – save trace headers that do not meet the specified ranges to a file.
▪ Matching headers to output – select additional headers to be included in the output
file or displayed in the error message for trace identification purposes. The headers
must be separated by commas.
▪ Error message – show a dialog box with the errors.
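A minimal Python sketch of both checks (an illustration of the logic, not the module's code; the header name and bounds in the example are hypothetical):

import numpy as np

def header_qc(values, vmin, vmax, inc_min, inc_max):
    # Range check per trace, plus increment check between adjacent traces.
    bad_range = (values < vmin) | (values > vmax)
    inc = np.diff(values)
    bad_inc = np.concatenate(([False], (inc < inc_min) | (inc > inc_max)))
    return np.where(bad_range | bad_inc)[0]   # indices of offending traces

# Example: CDP numbers expected in [100, 5000] with an increment of exactly 1.
cdp = np.array([100, 101, 102, 104, 105])
print(header_qc(cdp, 100, 5000, 1, 1))        # -> [3] (jump from 102 to 104)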
Stack QC
This module allows evaluating the signal-to-noise ratio within the specified window.
The module evaluates the cross-correlation degree either trace by trace or between the current trace
and the mean trace within the window.
Parameters
• Start time, End time (ms) – start and end of the time window within which the noise content
will be evaluated.
• Min/Max frequency – minimum and maximum frequencies in the range used to determine the
signal-to-noise ratio.
• Max shift – maximum shift between the traces within which the maximum correlation function
will be determined.
• Window length – trace window length used for calculation of the mean trace if correlation
with the mean trace in the window is performed.
• S/N ratio in header – header where the resulting signal-to-noise ratio values will be saved.
• Mode – application mode
▪ Trace by trace correlation – cross-correlation functions (CCF) are calculated between
the adjacent traces and auto-correlation function (ACF) is determined for each trace
within the ensemble, after which the mean CCF and ACF are found. It is assumed that
the mean CCF is the ACF of the signal, and the mean ACF is the ACF of the signal +
noise. The mean CCF spectrum SCCF(f) and mean ACF spectrum SACF(f) are
calculated, and the signal-to-noise ratio is determined using the following formula:
S/N = \sum_{f=f_{min}}^{f_{max}} \frac{SCCF(f)}{SACF(f) - SCCF(f)}
▪ Correlation with mean trace in window – correlations are calculated between pairs of
traces each consisting of an ensemble trace and the model trace obtained by averaging
all traces in the window.
Wavelet Processor
The Wavelet Processor is an interactive environment for visualization and editing of signals, analysis of their amplitude and frequency characteristics, import and analysis of existing filters, as well as building various shaping filters. The Wavelet Processor can be used for such procedures as construction of a debubble filter based on a given source signature, construction of a zero-phase filter, bringing the signal characteristics of different devices to a common form (for example, geophones to hydrophones), checking the operation of filters provided by the contractor, etc.
The signals and filters built can be exported and applied for subsequent processing in the production
mode.
NOTE: the module is of the stand alone type and must be alone in the flow.
Module parameters
Before you start the work, set the scheme where the visualization parameters are stored.
Scheme path – specify the location in the project database where the visualization scheme is stored. The scheme contains the sizes and positions of all windows, as well as the data display parameters. The scheme is automatically saved when the main window is closed and loaded when the window is opened.
After starting the flow with the module, the start window opens:
The main window for working with the Wavelet Processor module
The files containing the signal can be loaded using the corresponding icon or via the File -> Open tab. In addition to loading signals, the option to generate a Ricker wavelet is available via the File -> Generate tab.
• Sample Interval – trace sampling interval;
• Number of Samples – number of samples in the trace;
• Pick frequency – frequency that corresponds to the principal maximum of the spectrum.
Number – the number of the trace in the dataset that will be imported into the Wavelet Processor
module.
The program accepts as input text files in the following formats:
1) Sig files (.txt and .sig) – text files in the standard format of the Gundalf software.
2) Echos files (.txt) – text files in the Echos (Paradigm) software format.
NOTE: if you have a text file with a signal, the samples of which are recorded in one column
without headers, you can upload it to the project using the Load Text Trace module, and then use
the import option from the dataset.
After selecting the file with the signal, a window will appear in the workspace:
The window for working with the signal. It is possible to load several files; the signal from each will be presented in a separate window.
The window for working with the signal
File tab
Open – import of a signal from a text file or dataset from the project database. You can add several
different signals to one working window.
Export – export of the signal to a text file or dataset to the project database.
Save image – save the image in one of the specified formats to the specified directory on the hard disk.
You can also set the length and width of the exported image using Image width and Image height
parameters, as well as the resolution using Resolution, dpi parameter.
The options window for saving the image
Edit tab
Copy image to clipboard – save an image with a working window to the clipboard.
Processing tab
1) Wavelet/Scalar
• Add Scalar – add a scalar value to each signal sample.
• Divide Scalar by sample – divide the given scalar by the value of the signal sample.
• Multiply by scalar – multiply each signal sample by a given scalar.
• Scalar minus sample – subtract the sample value from the given scalar.
2) Static Correction – static signal shift:
• Apply Statics – statically shift the signal by the specified value in milliseconds.
3) Muting – muting the signal:
• Bottom muting – mutes the signal below the specified time.
• Top muting – mutes the signal above the specified time.
4) Filtering – frequency filtering of the signal:
• Butterworth filter – Butterworth filter. The description of the module parameters can be
found in Butterworth Filtering module.
• Ormsby bandpass filtering – Ormsby filter. The description of the module parameters can be found in the Bandpass Filtering module.
• Simple bandpass filtering – trapezoidal band pass. The description of the module
parameters can be found in the Bandpass Filtering module.
5) Wavelet Editing – signal editing:
• Length – change the trace length.
• Normalize – trace amplitude normalization
• Resample – resampling of the trace with the specified step.
• Reverse – reversal of the signal in time.
6) Spectrum – working with the signal spectrum:
• Min-phase equivalent – conversion of the signal to its minimum-phase form.
• Zero-phase equivalent – conversion of the signal to its zero-phase form.
7) Deconvolution – deconvolution of the signal:
• Predictive Deconvolution – predictive deconvolution.
View tab
Using this tab, you can select the elements that will be displayed in the window:
NOTE: you can also continue working with the processed signal in a separate window. To do
this, drag the stage from the procedure history to the workspace with the left mouse button
pressed.
When pressing a right mouse button on the applied procedure, a dialog box appears:
Show in this window – open the result of the procedure execution inside the current window.
Show in new window – open the result of the procedure execution inside a new window.
Details – open a dialog box with information about the parameters of the executed procedure.
Delete processing below – cancel all subsequent procedures after the current one.
4) Samples Table – a table of samples in Time-Amplitude format. The signal samples can be changed manually; to do this, double-click on the amplitude field:
Changing the signal samples in a manual mode
You can also change several samples at the same time. To do this, select a range of samples,
double-click on one of the samples within the selected range and change its value:
To change several samples at the same time, select a range of samples, double-click on one of the samples within the selected
range, change its value and press Enter.
5) Headers Table - display table with signal headers. Double click to change the headers except
for dt and NUMSMP:
6) Wavelets list – display a list of downloaded signals. Check the box for the signals that you want
to visualize in the working window. You can delete the signal from the window by pressing
Delete button.
Zoom tab
Using this tab, you can set how the working window will be scaled.
Tool bar
– zoom in/out.
You can also compare the amplitude and phase spectra of signals. To do this, drag several pulses into
one working window.
Comparison of the amplitude and phase spectra of the original signal and the signal after eliminating the
repeated bubble pulsation
After selecting one of these options, a window will appear where you need to drag the signals with which
the mathematical operation will be performed:
Dual Wavelet Processing window. Import two signals into the window, whereupon a dialog box appears to configure the procedure parameters. You can drag the signals from other windows or import them using the File -> Open tab.
We will demonstrate the use of Dual Wavelet Processing on the example of building a filter to remove
repeated bubble pulsations from the data.
Step 1. Loading the pulse into the working window.
Using the File tab, select Open-> Text File… and upload the file containing the signal:
After selecting a text file on the hard disk, a window for working with the signal will appear:
For the sake of simplicity we will eliminate the repeated pulsation using the bottom muting. For this, the
muting time is 100 ms, and the tapering is 20 ms. The result of the procedure is shown below:
With the help of bottom muting, the repeated pulsation was removed from the signal.
NOTE: you can also suppress the repeated pulsation by using predictive deconvolution.
As a result, a window will appear that asks you to add two signatures to build a cast filter:
The window for calculating the cast filter
NOTE: first, we drag the signal WHICH will be brought to the specified form into the window, and
then the signal TO WHICH it will be brought. In our case, we want to bring a signal in which there is a
repeated pulsation to a signal without repeated pulsations.
First, drag the signal containing the repeated pulsation into the Match Filter window with the LMB
pressed, and then the signal without the repeated pulsation.
After clicking OK, a cast filter will appear in the Match Filter window:
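Conceptually, a matching ("cast") filter of this kind can be computed as a least-squares Wiener shaping filter. The sketch below is our own illustration of that classical construction, not the module's actual algorithm (the filter length nf is a free parameter and must not exceed the signal length):

import numpy as np
from scipy.linalg import solve_toeplitz

def match_filter(src, dst, nf):
    # Least-squares shaping filter f of length nf such that src * f ~= dst:
    # solve the Toeplitz normal equations R f = g, where R holds the
    # autocorrelation lags of src and g the cross-correlation of dst with src.
    zero = src.size - 1                               # index of zero lag
    r = np.correlate(src, src, 'full')[zero:zero + nf]
    g = np.correlate(dst, src, 'full')[zero:zero + nf]
    return solve_toeplitz(r, g)

# Checking the filter amounts to convolving it with the input signal,
# which is what the Convolution window does:
# result = np.convolve(src, match_filter(src, dst, nf))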
To make sure that the built filter actually suppresses the repeated pulsation, we will convolve it with a signal in which the repeated pulsation is present.
Next, drag the match filter built in Step 3 and the signal containing the repeated pulsation into the
window:
Drag the cast filter and the repeated pulsation signal into the Convolution window.
We can see that there is no repeated pulsation in the result obtained. The calculated filter can be used to
suppress the repeated pulsation in the data using Custom Impulse Trace Transforms module.
Example 2
We will demonstrate the use of Dual Wavelet Processing on the example of building a filter to bring
the data to a zero-phase form.
We bring the imported signal to the zero-phase form using Processing -> Spectrum -> Zero-phase equivalent.
Signal zero-phasing.
Result of zero-phasing:
Drag the imported signal into the Match Filter window using the mouse, and then the zero-phase signal.
Then you need to configure the parameters of the match filter. Note that the zero-phase signal is located
in both positive and negative times. In this regard, the window for building the filter should capture both
the positive and negative parts of the zero-phase signal:
After clicking OK, the result of building the filter will appear in the window:
The cast filter built
NOTE: using the Convolution window, you can check that the built filter actually brings the signal to
a zero-phase form.
The convolution result of the original signal and the built zero-phase filter.
Set the save parameters in the dialog box that appears and specify the location in the database where
the filter is saved:
Using the Custom Impulse Trace Transform module, you can apply the calculated filter to the data.
To do this, in the Dataset field, select the filter saved to the database, specify the filter zero and set the
convolution mode to Amplitude spectra – Multiply, Phase spectra – Add.
3C Processing (multi-component processing)
Asymptotic CCP Binning
Asymptotic Common Conversion Point Binning module is designed for performing asymptotic binning
of converted PS waves’ data. Header values, relating to the common conversion point (CCP) are filled
in the data when the module is run within the flow: CCP_X, CCP_Y and ССР.
Moreover, the module can be used for conventional CDP binning in 2D case.
When using the module for CDP binning, you should set Gamma = 1. Then, immediately after this module in the flow, place a Trace Header Math module and write down the following expressions there:
CDP_X = [CCP_X]
CDP_Y = [CCP_Y]
CDP = [CCP]
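The asymptotic conversion point lies at a fraction Vp/(Vp+Vs) = 1/(1+Gamma) of the source-receiver offset measured from the source, which is why Gamma = 1 reduces the binning to the conventional midpoint (CDP). A minimal sketch (our own illustration):

def asymptotic_ccp_x(sou_x, rec_x, gamma):
    # Conversion point at the fraction 1 / (1 + gamma) of the offset
    # from the source; gamma = Vs/Vp.
    return sou_x + (rec_x - sou_x) / (1.0 + gamma)

# gamma = 1 gives the midpoint, i.e. conventional CDP binning:
assert asymptotic_ccp_x(0.0, 100.0, 1.0) == 50.0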
Parameters
• The Dataset field is intended for selecting the set of traces for the binning procedure. A window for selecting a dataset from the project database appears when you click the Browse… button.
• The Gamma=Vs/Vp field is intended for input of the shear-to-compressional wave velocity ratio, which is the binning parameter (if you use the module for CDP binning, specify 1).
• Bin to bin distance is the profiling parameter and corresponds to the distance between bins in
meters, calculated on binning profile.
• Strait line binning allows selecting the binning type: one-dimensional or two-dimensional. In the first case (when Strait line binning is chosen) it is assumed that all traces for binning have the same REC_Y and SOU_Y, and the binning profile is specified by the segment [Start X, End X]. The Bin range parameter specifies the bin width along the profile.
▪ Bin range
▪ [Start X, End X].
• Crooked line binning – when this option is on, the profile is specified by a crooked line read from a file. The Browse… button of the section allows selecting the profile file, while the Display crooked line button visualizes and allows editing the profile along with the crooked bin-profiling line, receivers, sources, CDPs and CCPs. If the file has not been specified, a straight profile connecting the source and receiver points of the first trace will be generated. The Bin inline and Bin xline parameters allow specifying the linear dimensions of the bins: the bin width along the binning curve and the height across the curve, correspondingly.
To magnify the image, point at the start coordinate of the fragment to be magnified on the corresponding bar, press the left mouse button (MB1) and, holding it, drag the mouse pointer to the end coordinate. The selected fragment will be magnified and scaled to fit the viewport.
To change the position of an existing node, grab it with the left mouse button (MB1) (the cross will change its color), drag it to a new position and release the button. The node will shift to the position under the cursor, and the bins will be redistributed at the same time.
To add a node to the curve, double-click the mouse button; a node will be added at the nearest point of the curve. To delete a node, click on it with the left mouse button (MB1) while holding down the Ctrl key.
The status bar shows the metric coordinates corresponding to the cursor position on the left, and a prompt explaining the colors of the visualized objects on the right.
FairField Rotation
This module is designed for rotation of trace samples acquired using three-component bottom stations
from FairFieldNodal by an angle defined by means of a rotation matrix.
For the module to run correctly, the project must include the following headers:
COMP – component number header, CHANNEL_SET – channel number header.
Headers containing the rotation matrix elements: MATRIX_H1X, MATRIX_H1Y, MATRIX_H1Z,
MATRIX_H2X, MATRIX_H2Y, MATRIX_H2Z, MATRIX_VX, MATRIX_VY, MATRIX_VZ.
Module parameters
• Rotate XY to Source – rotate the XY maximum to source.
• Average the matrix coefficients for one receiver/one shot location – average the matrix
coefficients for each source/receiver pair.
Modeling
Hodograph
This module allows calculating the arrival time of the direct wave or of a wave reflected from a specified reflecting boundary, under the assumption of a horizontally layered medium. The medium model is specified as a text file; the depth of the reflecting boundary is specified separately.
The seismograms running in the flow, with the source and receiver coordinate fields filled in, are input into the module. For every trace (i.e. for the source-receiver position indicated in its headers), the module calculates the wave arrival time, the angle at which the incident wave departs from the source, and the angle at which the reflected wave arrives at the receiver. These values are recorded into the indicated trace header fields.
Parameters
The calculated values (the arrival time and the two angles) are recorded into the indicated trace header fields. The model and the headers are selected in the Model group of parameters:
• Load from file. The horizontally layered model must be specified in a text file. To select the file, click Browse...; the file name can also be entered manually. The file format is described at the end of the module description.
• Time – from the list, select the header field where the time of wave arrival will be recorded;
• Source Angle – from the list, select the header field where the angle of incident wave at which
it departs from the source will be recorded;
• Receiver Angle – from the list, select the header field where the angle at which the reflected
wave comes to the receiver will be recorded.
Wave type. When the file with the model is specified, select the type of the wave for which the
hodograph will be calculated. To do this, use parameters available in the module dialog box:
• Direct wave
▪ P-dir – direct P-wave;
▪ S-dir – direct S-wave;
▪ Other–When this option is activated the type of the wave is taken from the text file
describing the lay model.
• Reflected Wave
▪ PP-ref - PP-type wave (both the incident wave and reflected waves pass through all layers
as the P-wave);
▪ PS-ref - PS-type wave (the incident wave passes through all layers just as the P-wave but
the reflected wave passes through all layers as the S-wave);
▪ SP-ref - SP-type wave (the incident wave passes through all layers just as the S-wave but
the reflected wave passes through all layers as the P-wave);
▪ SS-ref - SS-type wave (both the incident wave and reflected waves pass through all layers
as the S-wave);
▪ Other. When this option is selected, the wave-type distribution over the model layers is taken from the text file describing the layered model.
Reflection depth. For a reflected wave, specify the depth of the boundary (in meters); the arrival times of waves reflected from this boundary will be calculated. The depth of the reflecting boundary is specified in the Reflection depth field by one of two methods:
• Custom depth. Set the reflection depth manually for all source-receiver pairs at the same time.
• Load from header. Activate this option to take the reflection depth from a header field where it was previously saved; select the desired header from the list.
The model file must be a plain text file containing the values arranged as a table, with blanks or tabs as separators. The first line must start with the ~A symbols followed by the names of the columns available in the file.
After that, the respective values for every layer must be specified in the same order as the column headers. The depths are specified in meters, the velocities in km/s.
The WaveType column allows the user to assign the type of wave individually for every layer of the model. The module uses the values from this column only if the type of the wave is specified as Other in the module parameters dialog box. Otherwise, if the type of the wave is explicitly indicated in the module parameters, it is considered the same for all layers and the values in the WaveType column are disregarded.
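As an illustration only, a hypothetical model file could look like this (the column names and values below are examples; use the actual column names expected by the module):

~A Depth Vp Vs WaveType
100 1.5 0.4 P
350 1.8 0.6 P
900 2.4 1.1 S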
When the file with the model is specified, select the type of the wave for which the hodograph will be
calculated. To do this, use the Wave type group of parameters available in the module dialog box.
Here, activate either Direct wave option for direct wave or Reflected Wave option for reflected wave.
Add White Noise
This module allows adding white noise to the data.
Parameters
• White noise level – white noise level in percent of the maximum trace amplitude.
• Absolute value – if this box is checked, absolute rather than relative values will be added to
the trace amplitudes within the specified range.
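A minimal sketch of the two modes (hypothetical Python code, not the module's actual implementation):

import numpy as np

def add_white_noise(trace, level, absolute=False):
    # relative mode: the noise amplitude is `level` percent of the maximum trace amplitude;
    # absolute mode: `level` is used as the amplitude directly
    amp = level if absolute else level / 100.0 * np.abs(trace).max()
    return trace + np.random.uniform(-amp, amp, size=trace.shape)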
Add Event
This module simulates a linear seismic event and adds the results to the data.
Parameters
• First trace, Last trace – event boundaries defined as trace numbers relative to the current
sorting.
• Sample number of first trace – the sample number at which the event starts on the first trace.
• Dip (samples/chan) – linear event dip (number of samples by which the traces are shifted).
• Value – event amplitude.
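The dip arithmetic can be illustrated with a short sketch (hypothetical Python code): on trace n, the event sample is shifted by Dip samples per channel relative to the first trace.

import numpy as np

def add_event(data, first_tr, last_tr, first_sample, dip, value):
    # data: 2D numpy array of shape (n_traces, n_samples)
    for n in range(first_tr, last_tr + 1):
        s = int(round(first_sample + dip * (n - first_tr)))
        if 0 <= s < data.shape[1]:
            data[n, s] += value  # linear event: one shifted spike per trace
    return data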
2D FiniteDifference Modeling
The module is designed for full-waveform seismic modeling of wave propagation in a medium. For this purpose, the medium is divided into cells with specified characteristics (Vp, Vs, Rho), and the receiver and source geometry with respect to which the wavefield is calculated is specified.
Input data:
Output data:
1. Source geometry file and Receivers geometry file. These are sources and receivers geometry
files. Their format is as follows:
X Z // X coordinate is always in the first column (lateral distance), and Z coordinate is in the second column (depth). The first line of the file is not read.
0 2
2 4
4 1
……………..
20 0
The X coordinate is entered in the first column, the Z coordinate in the second column. The values are separated from each other by tabs or spaces.
2. Media table file — a file containing velocity and density values for each type of rock used in the grid model.
Each grid cell into which the medium is divided is characterized by compressional wave velocity (Vp), shear wave velocity (Vs) and density. The module can work with no more than 256 rock types (256 different combinations of the Vp, Vs and density parameters). The file is of the following form:
Index Vp [m/s] Vs [m/s] rho [kg/m^3]
……
Thus, one type of rock corresponds to one line of this table and this rock type is characterized by a
unique numeric index (Index) ranging from 0 to 255.
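For illustration, a hypothetical media table (the values are examples only; a water layer would have Vs = 0):

Index Vp [m/s] Vs [m/s] rho [kg/m^3]
0 1500 0 1000
1 2000 600 2000
2 3500 1800 2400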
3. Media matrix file — this file contains the grid configuration (X and Z coordinates of the grid nodes), as well as the index values (Index) characterizing the rock type in each cell. The standard file format is .grd:
ATTENTION!
A wavefront starts to propagate from each source simultaneously, so the propagating waves interfere. Receivers record seismic events over the time specified by the user.
Medium grid model. The figure shows locations of receivers and sources as well as indexes that characterize the
part of a medium within a single cell. X axis is a distance along the profile, Z axis is a depth. Note that
receivers and sources should be located at a distance of 1–2 cells from the edges of the grid specified in the
Media matrix file (the region is highlighted in green).
2D Finite Difference Modeling module operation window. Panel 1 is in charge of necessary input files loading:
location of sources and receivers as well as configuration and properties of the medium where modeling takes
place. Panel 2 parameters set time step for wavefield calculation, number of steps and registration time. To
change the grid step, activate "Override grid cell size" parameter and specify a coefficient by which the current
step between grid nodes (cell size) will be multiplied. Panel 3 is in charge of creating snapshots (wavefield
images at the specific time). Here you can specify the range of time steps in respect of which snapshots will be
saved.
Parameters:
ATTENTION! For a detailed description of the input files, see "Requirements for Input Files"
To change the step between grid nodes, activate the "Override grid cell size" field. In this case, it will be possible to set the "New cell size" coefficient, which shows by which factor the distance between two adjacent nodes changes. Reducing the grid cell size makes it possible to reduce the approximation error at each time step.
Save snapshots of the wavefield. To obtain wavefield images at specified time moments, activate the Save snapshots of the wavefield parameter:
• From step – time step number starting from which snapshots are saved.
• To step – time step number after which no more snapshots are saved.
• Save each – step between the time steps at which snapshots are saved.
• Path to save snapshots – path to the saved snapshots.
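For example (hypothetical values): with From step = 100, To step = 300 and Save each = 50, snapshots will be saved at time steps 100, 150, 200, 250 and 300.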
Data manipulation
Data Filter
This module allows the user to select traces in the flow by the values of specified headers.
Parameters:
• No filter – traces pass through unchanged.
• Match selection – only traces that match the specified selection will be added to the flow.
• Do not match selection – only traces that do not match the specified selection will be added to the flow.
SYNTAX:
The second line of the edit field contains the selection itself (see the picture). The selection syntax corresponds to the syntax used in the Trace Input module. The module supports the use of the replica system (see section Replica system).
Take values from file – loads a list of particular traces from a text file. In this case, the edit field of the module shall contain a string with the list of trace headers defining how the columns of the file are to be interpreted. The delimiter in the text file can be either a tab character or a space.
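As a purely hypothetical illustration of the file mode: if the edit field contains the header list

FFID CHAN

a matching text file could list one trace per line, e.g.

101 1
101 2
102 1

(The exact selection syntax for the in-flow mode is described in the Trace Input module section.)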
Add zero trace
The module allows the user to add empty traces to the flow. It can be used either as an input module (i.e. not requiring modules such as Trace Input, Seg-y Input, etc.) or as an extension to existing data. In the second case, the empty traces will be added to the beginning or the end of the seismograms, depending on the module's position in the flow.
Parameters
• Number of traces – the number of added traces.
• Number of samples – the number of samples in the added traces.
• Sampling rate – the sampling frequency of traces.
Resort
The module resorts the traces of a dataset into a specified order. When working with large amounts of data, the user should sort them into the correct order in advance, because sorting on the fly in the Trace Input module is very time-consuming.
Parameters:
• Input dataset – the input dataset with the initial sorting.
• Output dataset – the output dataset with the specified sorting.
• Primary sort field – the primary header field of sorting.
• Secondary sort field – secondary header field of sorting.
• Memory buffer size – the buffer size in MB used during the sorting. Choose the buffer size based on the available RAM. With the 32-bit software version, it should not exceed 1024 MB.
Comments
The module allows the user to add comments to the flow. The module does not affect the flow.
Auto Picking
First Breaks Picking
This module is designed for automatic picking of the times and amplitudes of first breaks within a set time interval.
Traces whose headers contain the time from which the first-break search starts are delivered to the module input.
As a result of the module operation, two header fields are assigned: one with the first-break times and one with the amplitudes corresponding to these times.
The module implements the following first-break detection methods:
1) Threshold: the algorithm finds the first sample whose amplitude exceeds the Threshold parameter. From that time, the search starts for the first characteristic point of the specified Type (minimum, maximum, zero-crossing, threshold value) corresponding to the first break.
2) Global: search within the set window for the absolute minimum or maximum, depending on the selected Type parameter, corresponding to the first break.
3) Derivative: the first break corresponds to the maximum energy rise in the window.
4) Modified Coppens’s method (MCM): a method of first-break calculation based on the analysis of energy ratios in windows of different lengths. It was described in article [1]. Here is a brief description:
Step 1. The trace energy is calculated in two windows: the Longer Window (E1(t)) and the Leading Window (E2(t)).
Demonstration of the ER(t) function calculation: the right borders of the Longer Window and the Leading Window coincide and consistently move to the right for each new value of the ER(t) function. The width of the Longer Window increases (its left border is fixed and corresponds to the zero sample), while the Leading Window simply moves rightwards without changing its width. The recommended length of the Leading Window is equal to the period of the first break.
Step 2. The function ER(t) = E1(t)/(E2(t) + β) is calculated, where β is a constant stabilizing the function value.
Step 3. The ER(t) function is smoothed by an Edge Preserving Smoothing (EPS) filter (the operating principle is provided at the end of the module description), and its derivative is calculated.
Step 4. The picking point is set on the sample where the derivative has the maximum value.
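A minimal sketch of the MCM computation (hypothetical Python code; a simple moving average stands in here for the EPS smoothing, which is sketched after the EPS description below):

import numpy as np

def mcm_pick(trace, leading_len, beta=0.2):
    e = trace.astype(float) ** 2
    E1 = np.cumsum(e)  # Longer Window energy: left border fixed at the zero sample
    E2 = np.convolve(e, np.ones(leading_len), mode='full')[:len(e)]  # Leading Window energy
    ER = E1 / (E2 + beta)                              # Step 2
    ER = np.convolve(ER, np.ones(5) / 5, mode='same')  # Step 3 (the module uses EPS instead)
    return int(np.argmax(np.diff(ER)))                 # Step 4: sample of the maximum derivative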
1) Let us assume that the filter length is equal to five samples. Consider the five windows of that length containing sample si.
2) Calculate the mean value and the variance of the samples in each window.
3) Choose the window with the least variance and take the mean value of the samples in that window.
4) Assign this mean value to sample si in the output trace.
5) Turn to the next sample and perform all the specified steps for it.
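A minimal sketch of the EPS filter described above (hypothetical Python code):

import numpy as np

def eps_smooth(x, win=5):
    # x: 1D numpy array; for each sample, among all windows of length `win` containing it,
    # take the mean of the window with the smallest variance
    n = len(x)
    y = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, x[i]
        for start in range(max(0, i - win + 1), min(i, n - win) + 1):
            w = x[start:start + win]
            if w.var() < best_var:
                best_var, best_mean = w.var(), w.mean()
        y[i] = best_mean
    return y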
Module operation window
Module parameters
Output headers:
• First Break time is the header containing the times of first breaks which will be filled in
during module operation.
• First Break amplitude is the header containing the values of amplitudes corresponding to first
breaks picking.
• Horizon (header word) is the header containing the time value from which first break times
will be searched for.
• Window length is the length of a time window (ms) in which first break times will be searched
for. The window center for each trace corresponds to picking Horizon (header word).
Demonstration of the choice of the pick corresponding to the first break when the value of the
parameter Maximum/Global Maximum ratio is 30%
References:
1) Sabbione, J.I. & Velis, D. 2010. Automatic first-breaks picking: New strategies and algorithms.
Geophysics, 75: 67–76.
3D auto picker
The module allows extending the picking of a horizon to the entire three-dimensional seismic data cube. The initial horizon picking, set for a certain number of traces, is supplied to the module input.
The module falls into a stand-alone group, i.e. it should be alone in the flow and does not require input
and output modules.
Parameters
• Processed dataset – a dataset where it is necessary to trace the horizon.
• Seed horizon – the path to the initial horizon picking in the database.
• Output horizon – the path to the picking resulting from module operation.
• You can also set the initial picking using headers specifying the necessary header in Seed
header field. In this case, the result of module work will be recorded to Output header.
• Window – a time window in which a correlation between adjacent traces is calculated.
• Allowable shift – maximum possible shift between points of adjacent traces.
• Stop threshold – threshold below which traces are assumed not to be correlated. This
parameter is indicated in percentage terms, it can be fractional. Parameter values range from 0
to 100 exclusive of the limits.
• Load aperture – a technical parameter: the length of the portion of the input dataset traces that is read into memory while the module is running.
ATTENTION: Traces without the selected horizon as well as traces with high noise level or low
signal amplitude will have no picking.
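A schematic sketch of one tracking step between two adjacent traces, illustrating the Window, Allowable shift and Stop threshold parameters (hypothetical Python code, not the module's actual implementation):

import numpy as np

def track_step(ref, cand, t0, win, max_shift, stop_threshold):
    # ref: an already picked trace; cand: the adjacent trace; t0: horizon sample on ref
    seg = ref[t0 - win // 2 : t0 + win // 2]
    best_r, best_s = -1.0, 0
    for s in range(-max_shift, max_shift + 1):
        c = cand[t0 + s - win // 2 : t0 + s + win // 2]
        if len(c) != len(seg):
            continue
        r = np.corrcoef(seg, c)[0, 1]  # correlation within the time window
        if r > best_r:
            best_r, best_s = r, s
    if best_r * 100.0 < stop_threshold:  # below the threshold the trace stays unpicked
        return None
    return t0 + best_s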
Marine
Zero-Offset DeMultiple
The module is designed for multiple suppression on near-offset single-channel or stacked seismic data. The algorithm is based on adaptive subtraction of a model of the multiples from the original wavefield. The model is obtained from the data itself, either by a static shift of the original traces or by autoconvolution.
The adaptive subtraction algorithm used here is the same as implemented in the Wave Field Subtraction
module and is discussed in more detail in the part of the Manual dedicated to that module.
In general terms, a special filter is calculated for each trace based on both the original data traces and the model traces. This filter, when applied to the trace, minimizes the RMS amplitudes of whatever is found similar between the trace and the model. The filter calculation accounts for non-stationarity, to make the filter adaptive to events that are quite similar but not exactly the same. This makes the subtraction efficient to a certain extent even for rather approximate models, when the arrival time and/or the amplitude of the modeled multiple differ from the actual observation. However, the more similar the model of the multiples is to the real multiples observed, the more efficient the subtraction. For this reason, the shorter the source-receiver offset, the better the result that can be obtained, because the most accurate multiple modeling can easily be made on zero-offset data. For the same reason, the module is less efficient if the data is significantly disturbed by sea swell, because the primaries and the multiples are disturbed differently and the resulting model becomes less similar to the observation.
When the filter is calculated for a particular trace of the original data, some adjacent traces of the model can also be used besides the corresponding model trace: some extra similarity to what is observed on the original data trace may be found there. Thus, using adjacent traces can make the subtraction effect stronger. On the other hand, the filter shall not differ too much from trace to trace. For this reason, the filters calculated for each of the original traces can afterwards be averaged over a base of several traces. This makes the subtraction milder and helps to avoid the effect of erasing a gap around the removed multiple.
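A minimal sketch of the two model-building options mentioned above (hypothetical Python code; wb is the water-bottom two-way time in samples):

import numpy as np

def model_by_shift(trace, wb):
    # first-order seafloor multiple model: the trace delayed by the water-bottom two-way time
    m = np.zeros_like(trace)
    m[wb:] = trace[:len(trace) - wb]
    return m

def model_by_autoconvolution(trace):
    # autoconvolution of the trace, truncated to the original trace length
    return np.convolve(trace, trace)[:len(trace)]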
Apply top-muting before modeling – when the option is checked the data will be top-muted before the
model is generated. You can select either Horizon from pick or Horizon from header to define the
muting. Here the Tapering window length above the horizon defines a window in % above the horizon
where the amplitudes will gradually fade out down to 0.
Processing windows — the processing can be performed independently in separate windows; an individual subtraction filter is then calculated for each window. This may lead to a more accurate result. You might wish to make a separate window for each strong seafloor multiple observed.
The windows are defined by horizons or headers – each pick is considered to be a boundary between a
window above it and another window below it. Thus, when no picks are specified in the list here the
whole data range is considered as one window. One pick defines 2 windows (above and below), 2 picks
define 3 windows (above both, in between of them, and below both), etc.
Using the buttons above the list of currently selected picks you can Add picks to the list and Remove them. Make sure that the picks added to the list do not criss-cross, otherwise the module behaves unpredictably. It is not recommended to make the windows too narrow, otherwise everything within them may be erased.
When several processing windows are used, the Filter length and White noise level (as well as the Band
transform parameters discussed below) are to be indicated for each window individually. If you wish to
exclude some window from the processing, set its filter length to 0.
The Tapering length parameter sets the zone, measured in samples, over which the results of different windows are stitched together.
Use adjacent traces – this group of parameters regulates the use of adjacent traces of the model for filter calculation, as well as the averaging of the resulting filters over a base of several traces.
• Number of traces – specify the number of adjacent traces of the model taken on each side of the current trace for the filter calculation. When this parameter is 0, the filter for the current trace is calculated using the corresponding trace of the model only. When it is set to 1, the filter is calculated based on 3 traces of the model: the current trace and 1 adjacent trace on each side, etc. (For more details of the implementation refer to the Wave Field Subtraction module description.) Using more traces here leads to stronger subtraction and longer processing time. A good starting guess for this parameter is 3; then you may try to increase the number to subtract more.
• Filter averaging base, [traces] – specify here how many adjacent traces the resulting filters will
be averaged over. Normally, relatively big values are good here. You can start with 25 and then
see if you would like to make subtraction stronger (smaller number) or milder (larger number).
• Filter calculation step, [traces] – you can calculate the filter not for every trace but at a specified
interval (must be less or equal to a half of the averaging base). The filters in between will be
interpolated. This can make the calculation significantly faster.
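For example (hypothetical values): with Filter averaging base = 25, the Filter calculation step may be set to at most 12; the filters are then calculated every 12 traces and interpolated in between.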
The Subtraction parameters group defines how the model will be subtracted from the data:
• Filter length, [samples] – the length of the adaptive filter used to form the subtraction result, specified in samples. The filter is calculated from both the original trace and the trace(s) of the model and aims to find similarities and minimize their RMS amplitudes on the resulting trace. Typically, the filter shall be longer than the wavelet. Normally, up to a certain extent, the longer the filter, the stronger the subtraction effect and the more time required for the processing.
• White noise level – regularization parameter; the larger the value, the more subtle the subtraction effect. More details are provided in the Wave Field Subtraction module description.
Model shifts – shift the model to the maximum cross-correlation between the model trace and the data trace before any other calculations. This is aimed at improving the efficiency of the adaptive subtraction in case of strong sea swell.
• Shift model – set Yes (1) if it is necessary to shift the multiples model or No (0) if it is necessary
to leave it without shifting.
• Max shift up, [samples] – the maximum value by which the trace can be shifted up.
• Max shift down, [samples] – the maximum value by which the trace can be shifted down.
Use band transform — this option limits the bandwidth of the operation, saving processing time and
providing additional frequency filtering at the same time. The Low frequency and High frequency
parameters set the lower and upper limits of the used frequency range, respectively.
Don't subtract output model – when this flag is checked, no subtraction of the model is performed. Instead, the module merely outputs the model of the multiples prepared for subtraction. It is often crucially important to see the model in order to understand the subtraction result (especially when it happens to be not as good as expected).
Tides Import
The Tides Import is a stand-alone module. Its parameter dialog window consists of several tabs:
Dataset – click the ... button to the right of the string to select a dataset where the tidal corrections are to be imported.
Tides Tab
On this tab, select a text file with the sea level altitudes and define its format.
Definition of Field
The positions/columns from which the corresponding values will be loaded are indicated here. The time positions (hour, minute, second) and the column with the tide values to be imported (VALUE) are to be indicated.
Date. In the Date field, specify the date of the first record in the file (since only hours, minutes and seconds are read from the file).
The File button allows selecting the file with the information to be loaded.
Use the Edit File Layout button to specify the structure of the tides file.
Headers Tab
Marine geometry input
This module is designed for the input of positioning information (geometry) into marine data. The module is stand-alone, that is, it must be the only module in the flow; no additional input/output is required. The module parameter dialog is shown below:
Click the … button to select a dataset in the project database where calculated geometry will be
assigned. The path to the file in the database will be displayed in the Dataset line:
CDP_X – CDP X-coordinate, CDP_Y – CDP Y-coordinate, CDP – CDP number, SOU_L – cumulative distance from the profile start.
The module essentially enables two ways of geometry calculation and input into the data – using the Real ship coordinates from GPS, or calculating artificial “Dummy” coordinates along the line based solely on the source-receiver geometry layout. In both cases, specification of the correct source-receiver geometry configuration is required.
One of the key points concerning geometry input is the selection of the way the positioning information will be matched to the traces in the dataset: either by time (Time match) or by a single trace header, typically the field record number – FFID (Header field match).
For geometry input, it is essential that the matching header values in the dataset coincide with those in the coordinate file.
Let us consider the process of geometry input using the navigation file in more detail.
Check the Real ship coordinates option to calculate positioning using real ship coordinates. If the coordinates are linked to particular field record numbers, select Header field match in the Select matching box and specify a header that will be used for matching (typically, FFID).
In the following example, the coordinates will be matched to the traces by time, so we select the Time match option in the Select matching box. When it is activated, the coordinates are matched to the traces based on the YEAR, DAY, HOUR, MINUTE and SECOND headers. The DAY header shall contain the Julian day – the ordinal number of the day of the year. Specify the date corresponding to the first line of the file with the coordinates using the Select date and/or Julian day fields.
Now click the Ship Navigation button to select a file with GPS coordinates and specify the navigation layout, i.e. the exact way the coordinates will be read and interpreted:
Here, specify the Coordinate system of the GPS coordinates in the file, which can be either UTM (UTM_X/UTM_Y) or geographic (Lon/Lat).
Depending on the selected coordinate system, you will need to indicate the coordinates in the UTM_X/UTM_Y format or in the latitude-longitude layout, respectively.
To select a navigation file, click the Select file button. When a file is opened, its contents will be
displayed in the dialog window:
In this example, geographic coordinates are used, specified as degrees-minutes-decimal minutes. As stated above, in this case we should indicate the coordinate system type – geographic – by selecting the Lat/Lon option in the Coordinate system field.
Then, match the value columns of the selected file with the fields of the Definition of field table. As the coordinates here are to be matched by time, these fields are hour, minute, second, and the coordinates in degrees, minutes and seconds.
Columns of the file can be interpreted either as space-Delimited or Fixed width. As the time values (hours-minutes-seconds) in the uploaded file are not separated from each other by a delimiter, the Columns–Fixed width option should be used. This lets us explicitly indicate the positions to be used for reading the values of time and coordinates.
To do this, click the left mouse button on the first line in the Definition of Field – Time-Hour – and match it with the column indicating the hour (22) by pressing the left mouse button at the beginning of the column and dragging it to the position where the hour value ends (two digits only). The selected columns will be marked by colour.
After the column has been selected with the mouse, press the Set pos (set position) button to fix its position. After that, for each line of the file, the value indicated in the coloured column will be interpreted as an hour.
Specify the positions of minutes and seconds in the file the same way.
After the time has been identified, specify the coordinate positions in the file. As the Lat/Lon coordinate system is selected, we need to specify the columns for degrees, minutes and seconds. When loading, the software will recalculate them into UTM coordinates. Fill in the UTM Zone number to specify a particular UTM zone for the recalculation. If you keep it zero, the most appropriate UTM zone will be calculated automatically.
Geographic coordinates may be defined as degrees with decimals, degrees-minutes with decimals, degrees-minutes-seconds with decimals, minutes-seconds with decimals, etc. Any combination is allowed: if any of the fields is not used, select it in the Definition of Field table and click the Field switch off button.
In this example, the coordinates are represented as degrees-minutes-decimal minutes, so for both LAT and LON we will need to switch the seconds off. Select LAT-Sec and click the Field switch off button; the field will be marked with -1. Repeat the same with LON-Sec.
Now we need to indicate the positions of degrees and decimal minutes in the same way as was done for the time. After each column position is selected correctly, fix it by clicking the Set pos button.
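For example (hypothetical values): a latitude recorded as 59 24.5074 (degrees and decimal minutes) corresponds to 59 + 24.5074/60 ≈ 59.40846 decimal degrees; this conversion, followed by the UTM recalculation, is performed by the software automatically.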
If your file has a textual header or footer, you can use the Lines parameters to specify a limited range of lines from which the file values will be read. As the first line of the indicated file contains a header, we start reading the values from the second line:
After all the field values have been assigned (to check this, click each field in the table in turn – the correct file column should be highlighted), click the OK button, and the module will return to the main menu.
IMPORTANT: When the coordinates are matched to the traces by time, the time in the header fields (read by default from the field SEG-Y or SEG-D files) should coincide with the time recorded in the GPS file. Otherwise, the coordinates will not be assigned correctly!
IMPORTANT: When the coordinates are matched by time, the DAY header field of the dataset must contain the same Julian day number as specified in the Julian day field of the module dialog. Take into account the date increment at 00:00. If the DAY header does not reflect this increment correctly, the coordinates will be assigned to a part of the traces only.
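A quick way to check the expected DAY value (a hypothetical Python illustration, not part of the software):

from datetime import datetime, timezone

t = datetime(2024, 3, 1, 0, 0, 5, tzinfo=timezone.utc)  # hypothetical shot time
print(t.timetuple().tm_yday)  # 61: the Julian day expected in the DAY header
# note the increment at 00:00 - a shot at 23:59:55 on the previous date belongs to day 60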
The picture below illustrates the important points described above – the necessary matching of the time in the dataset headers and in the file with the coordinates, as well as of the Julian day of the year.
If the GPS file with coordinates was not processed, you can smooth a navigation track using the
Coordinate Smooth option, selecting the averaging base (number of points – Window length) and
the Rejection percent.
After the ship navigation has been loaded, to calculate the source-receiver geometry coordinates,
specify its configuration in the Source/Streamer geometry tab:
The upper part of the tab schematically shows the source-receiver geometry tied to the local coordinate system, with the 0 point at the GPS antenna. The negative Y direction corresponds to the ship’s heading, and the X direction is perpendicular to it. The negative X direction is to port, the positive one to starboard.
IMPORTANT: In the schematic picture of the tab, the source and receiver are placed on different sides for visualization; in reality, their X-axis offsets may have the same sign or be equal to zero (i.e. the source-receiver geometry is placed strictly behind the ship).
Set up the source-receiver geometry parameters according to the available information about the location of the source and receivers relative to the GPS position (when switching to any of the distance specification fields, the corresponding distance is highlighted in the picture):
• Straight line – the streamer is always located straight behind the ship along the ship’s course;
• Follow ship track – the streamer follows the ship’s track, which generally is not dead ahead.
Heading Calculation. In both cases, the ship’s course is calculated from the navigation using the Heading Calculation field with the indicated base.
Receiver geometry:
• First receiver dx (m) – X-axis distance (m) between the GPS position and the first receiver
in the streamer;
• First receiver dy (m) – Y-axis distance (m) between the GPS position and the first receiver
in the streamer;
• Number of receivers – the number of receivers in the streamer;
• Distance between receivers (m) – distance (m) between receivers in the streamer.
This information makes it possible to identify the source-receiver group coordinates fully and,
accordingly, to identify the CDP coordinates and CDP numbers necessary for further data processing.
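A minimal sketch of this calculation in the local frame described above (hypothetical Python code; the GPS antenna is at (0, 0), the streamer is assumed straight behind the ship, and the CDP is taken as the source-receiver midpoint):

def receiver_xy(first_dx, first_dy, n_rec, spacing):
    # receiver coordinates along a straight streamer behind the ship (positive Y astern)
    return [(first_dx, first_dy + i * spacing) for i in range(n_rec)]

def cdp_xy(sou, rec):
    # common depth point taken as the source-receiver midpoint
    return ((sou[0] + rec[0]) / 2.0, (sou[1] + rec[1]) / 2.0)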
Additional parameters that influence the coordinate calculation accuracy are as follows:
Geometry input without using the GPS file with ship coordinates
Another essential possibility is geometry calculation without using the GPS ship coordinates
(“dummy” geometry). In this case, input only the source-receiver geometry data. The calculated
coordinates of the source and receivers will be relative, i.e. not bound to real geographic coordinates.
However, such geometry input is a necessary and sufficient condition for the further data processing.
The mode is enabled by selecting the “Dummy” coordinates option in the Ship navigation tab; in
so doing, the items relating to the GPS file download will be unavailable. Instead, fill in the Shot
interval field to specify the distance between the source points in meters:
The Source/receiver geometry tab in the «Dummy geometry» mode:
In the given tab, it is only necessary to specify the number of channels in the receiver group, the distance between the channels, the X- and Y-axis offsets of the source, as well as the CDP bin size.
The Dummy geometry mode implies that the first receiver channel is placed at zero, i.e. at the origin of the coordinate system. Thus, if the source was placed in front of the receiver (on the left in the current coordinate system) during the survey, set the dy offset value as negative.
As the ship moves within a reference system different from the geometry setup coordinate system, the calculated coordinates of the receivers in the streamer will be negative until the streamer has covered a distance of one streamer length along the ship’s course. A schematic layout of the coordinate systems relative to each other, implied when specifying the geometry and the ship’s course, is presented below (it corresponds precisely to the Dummy geometry mode). Blue stands for the CS against which the ship moves, deep red for the CS of the geometry input (displayed in the module); only the positive directions are shown.
It is also worth noting that the CS of the ship movement determines the headers intended for recording the calculated positions of the sources and receivers. In other words, the blue X corresponds to the REC_X and SOU_X headers, and the blue Y corresponds to the REC_Y and SOU_Y headers. Clearly, the first receivers’ positions in the streamer will be negative until the streamer crosses the zero point of the CS of the ship movement.
In general, in the Real ship coordinates mode, the relative position of the coordinate systems looks similar. The difference is that the receiver geometry CS may, according to the obtained GPS coordinates, generally be found anywhere with respect to the CS of the ship movement.
Parameters
• Ship navigation – loading the file with ship navigation. After choosing this menu item, a
dialogue box will appear:
Having selected the file, the user should determine the columns which contain the information about
the time or FFID and the coordinates.
• In the case of Columns–Delimited, a corresponding column of the geometry file needs to be defined for each line in the Definition of field table. For this purpose, choose the wanted field (for example, Time-Hour), click the required column with the left mouse button and press Set pos. The current field will show the number of the column from which the value will be read.
• In the case of Columns–Fixed width, for each line in the Definition of field table it is necessary to define the character range containing the required value. For this purpose, select the first line in the Definition of field with the left mouse button (when matching by time it will be Time-Hour (hour); when matching by a header field it will be that header field); then press the left mouse button within the displayed file at the beginning of the column corresponding to the hour and release it where the column ends. The selected range will be highlighted. To fix the range of the current line in the Definition of fields, press Set pos.
Field switch off – this option allows you to «switch off» any of the rows in the Definition of fields set, which makes it easier to accommodate different record formats – degrees-minutes-seconds, degrees-minutes-decimal minutes, etc.
Example:
If the coordinates are recorded in decimal degrees format, go to the LAT / LON-Min and LAT / LON-Sec fields and switch them off by clicking Field switch off. Now only the LAT / LON-Deg field needs to be set to decimal degrees. You do not need to recalculate the coordinates into a format such as degrees-minutes-seconds – the program will do it automatically.
Lines – used to choose the range of lines from which the values will be read. This option allows skipping lines with additional information (for example, captions).
Coordinates system – setting the coordinate system in which the file with the coordinates is recorded.
The Load / Save template buttons let you save and load configurations for assigning values from the navigation file.
Clicking the OK button returns the module to the previous MDI window and saves all the changes.
Select matching
Time match – the coordinates will be assigned based on the correspondence between the time recorded in the dataset headers and the time recorded in the GPS file. The ordinal number of the day of the year, determined by the Julian day field, is also used when assigning the coordinates.
Header match – the coordinates will be assigned based on the correspondence between any of the header fields and the respective number in the file. More often than not, this field is the sequence number of the shot (FFID), recorded both in the source data headers and in the file with the coordinates. If this option is selected, then in the Edit navigation layout – Definition of field window, Matching_field (i.e. the field selected in the previous window) will be displayed instead of the time.
Shot report – after the module has finished, it displays the Progress Report.
The Source / Streamer geometry tab is used to define the source-receiver geometry.
Streamer shape – defines the form of the receiving streamer:
• Straight line – the streamer is always located in a straight line behind the ship along the ship's course;
• Follow ship track – the streamer follows the ship’s track, which generally is not dead ahead.
Heading Calculation. In both cases, the ship’s course is calculated from the navigation using the Heading Calculation field with the base defined therein.
• First receiver dx (m) – distance in meters along the X-axis between the GPS position and
the first receiver in the streamer;
• First receiver dy (m) – distance in meters along the Y-axis between the GPS position and
the first receiver in the streamer;
• Number of receivers – the number of receivers in the streamer;
• Distance between receivers (m) – distance between receivers in the streamer.
• Source dx (m) – distance in meters along the X-axis between the GPS position and the
source;
• Source dy (m) – distance in meters along the Y-axis between the GPS position and the
source.
SharpSeis deghosting
The SharpSeis Deghosting routine is dedicated to removing the ghost wavefield from marine seismic data. The algorithm can be applied to both 2D and 3D marine seismic datasets with any type of source and does not require any additional information except the data itself. The SharpSeis Deghosting module utilizes a stabilized approximate recursive filter solution, applied to a seismic trace in both forward and reversed time. The resulting two traces (primary wavefield without the ghost, and ghost wavefield without the primary) are then combined in a nonlinear manner in order to maximize the signal and suppress the noise trains, stabilizing the result even further. The optimum ghost delay is estimated adaptively to the data within a sliding window, to ensure the best possible match. This results in sharp, crystal-clear seismic images with a high signal-to-noise ratio. The theory is described at the end of the module description.
Module parameters
Ghost time-delay adaptation:
• if checked ON, the ghost time-delay will be adaptively estimated both in time and space
windows from the range, defined in Ghost time-delay interval, [ms]:
▪ Ghost time-delay interval, [ms] – select the headers with minimum and maximum
values of ghost time-delay in ms for the adaptive time-delay estimation.
▪ Time window length, [ms] – time window in ms in which the adaptation normalized power will be calculated. The estimated time delays are interpolated between the time window centers.
▪ Time window step, [ms] – step between the time windows in ms.
▪ Trace window length – number of traces over which the ghost time-delay will be estimated by the adaptation normalized power. The estimated time delays are interpolated between the trace window centers.
▪ Trace window step – step between the trace windows.
• if checked OFF, a single ghost time-delay value will be used for the deghosting routine:
▪ Ghost time-delay, [ms] – select the header with the ghost time-delay in ms to be used
by the deghosting algorithm
• If Load ghost time-delays from dataset is checked, the ghost time-delay values will be taken from a dataset. The dataset must be of the same size and sorting order as the data being processed. With this option you can estimate and output the ghost delays, process them to get rid of occasional misestimates, and then load them back.
• if checked ON, the ghost amplitude will be adaptively estimated both in time and space windows from the range defined in Ghost amplitude interval. Smaller amplitude values produce less noise during the recursive filtering, while bigger amplitude values result in better ghost subtraction. Thus, the choice of the amplitude value is a trade-off between the noise level and the ghost subtraction quality.
▪ Ghost amplitude interval – select the headers with minimum and maximum values
of ghost amplitude for the adaptive amplitude estimation
• If checked OFF, a single ghost amplitude value will be used for the deghosting routine:
▪ Ghost amplitude – select the header with the ghost amplitude to be used by the
deghosting algorithm
Output:
• Subtraction results - data with suppressed ghosts will be output as a result of the module’s
operation
• Ghost model only – ghost model will be output as a result of SharpSeis routine. This ghost
model can be subtracted afterwards by Adaptive Wavefield Subtraction module.
• Output estimated parameters – the estimated ghost time-delays will be output as trace amplitudes. This can be used to adjust the ghost time-delay.
SharpSeis application
1. Recommended sorting of input data is common channel gathers (CHAN:FFID). SharpSeis
should be applied for every single channel gather. SharpSeis can be applied for both 2D and
3D data.
2. SharpSeis should be applied before Stacking and Normal Moveout corrections.
3. The Nyquist frequency of the input data should be equal to (5-10)*Fmax, where Fmax is the maximum meaningful frequency of the data. The Nyquist frequency can be increased with the Resampling module.
4. Data should be prepared by removing direct wave arrival and strong linear noises. Top muting
is strongly recommended before SharpSeis application.
5. Minimum and Maximum ghost time delay should be written to the headers before application.
Ghost time delays can be initially estimated from frequency spectrum notches or directly from
data.
6. A ghost amplitude of around 0.8 is a good starting value for the SharpSeis deghosting algorithm.
Method theory
It is well known that the deghosting problem does not have an exact and stable solution without using additional data from special acquisition methods (variable-depth streamers, dual-sensor streamers, over-under acquisition). To show the complexity of the problem, we start by considering a 1D model of a seismic trace:
z(t) = p(t) − p(t − θ) (1)
where θ is the ghost time-delay and p(t) is the desired ghost-free solution of the trace. Here, we assume the reflectivity of the sea surface to be equal to −1. Equation (1) could be solved recursively in the following form:
This approach is not applicable in practice, as it accumulates error exponentially. One possible and relatively simple way to solve equation (2) is to use approximate solutions. For the approximate solution, we introduce a parameter q into formula (2) as follows:
which, strictly speaking, is an infinite pulse train decreasing exponentially with time. This amplitude error can be decreased by considering a recursive filter in the opposite time direction:
The resulting two-way filter, the sum of the forward and reverse filters, decreases the amplitude error by half:
𝑝̃(𝑡) = ½ (𝑝̌(𝑡) + 𝑝̂(𝑡)) (6)
This solution decreases the amplitude of the error (4) but not the resulting energy. Nonlinear filtering can be used as a modification of formula (6):
where 𝜔(𝑡) is chosen in the following manner: if 𝑝̂(𝑡) ≈ 𝑝̌(𝑡) on a certain time interval, then 𝜔(𝑡) is chosen equal to 0.5; otherwise, 𝜔(𝑡) is inversely proportional to the RMS amplitude of 𝑝̂(𝑡). As a result, the noisy part (pulse train) of the trace deghosted by the forward filter is replaced by the same part of the “reversely” deghosted trace, which does not contain such noise. This approach decreases the noise while preserving a good signal level.
In practice, the optimum time-delay 𝜃 and the parameter q are constant neither in space nor in time – adjusting these parameters within a specified range can significantly improve the processing result. In the implemented deghosting algorithm, the parameters are estimated adaptively to the data within sliding windows by solving a nonlinear optimization problem.
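A schematic illustration of the forward/reverse recursions and their two-way combination (6) (hypothetical Python code, assuming an integer sample delay; the module itself additionally applies the nonlinear weighting and adaptive parameter estimation described above):

import numpy as np

def deghost_two_way(z, theta, q):
    # z: recorded trace; theta: ghost delay in samples; q: stabilization parameter (< 1)
    n = len(z)
    pf = z.astype(float).copy()  # forward recursion: pf[t] = z[t] + q * pf[t - theta]
    for t in range(theta, n):
        pf[t] += q * pf[t - theta]
    pr = np.zeros(n)             # reverse recursion: pr[t] = q * pr[t + theta] - z[t + theta]
    for t in range(n - theta - 1, -1, -1):
        pr[t] = q * pr[t + theta] - z[t + theta]
    return 0.5 * (pf + pr)       # two-way combination, formula (6)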
Reference:
“Deghosting of High Resolution Marine Seismic Data by Adaptive Filtering Algorithm”, S.A.
Vakulenko, S.V. Buryak, P.A. Gofman and D.B. Finikov, Near Surface Geoscience 2014 - First
Applied Shallow Marine Geophysics Conference
Deghosting (Ghost wave suppression on near offsets)
This module is used to suppress ghost waves in single-channel or stacked data obtained with small
source-receiver offsets. The algorithm implemented in the module is based on adaptive subtraction
of the ghost wave model from the source wave field. The ghost wave model is created from the data
itself by means of static shifting of the original traces by the specified time. The ghost wave adaptive
subtraction algorithm is similar to the one used in the Wave Field Subtraction module and is described
in detail in the relevant section of the User Manual. The basics of the module operation are briefly
described below.
A shaping filter that minimizes the result of subtracting the ghost wave model from the original field is constructed for each trace based on the original and model data. The filter searches for similar reflections present both in the original data and in the ghost wave model and minimizes their root-mean-square amplitudes in the resulting field. Additional non-stationarity, which allows the filter to adapt to events that are similar but not identical, is introduced in the process of the filter calculation.
This allows the subtraction algorithm to work efficiently to a certain degree even with approximate
ghost wave models wherein the arrival times and reflection amplitude differ somewhat from the
observed ones. However, the closer the ghost wave model is to the actually observed ghost waves,
the better it will be subtracted. Therefore, the best results can be achieved with minimum
source-receiver distances, since zero offset allows obtaining the most accurate model. For the same
reason the module works less efficiently when data are substantially affected by sea disturbance.
In the process of filter calculation for each individual original data trace several adjacent traces may
be used in addition to the model trace corresponding to the original data trace, since it may be found
that the specific features of the current trace are more similar to fragments of the adjacent traces in
the model. Consequently, use of adjacent traces may result in stronger subtraction. On the other hand,
filters generated for adjacent traces should not differ from each other substantially. Therefore, it may
be useful to average the filters calculated for each trace with several adjacent traces. This operation
results in a softer subtraction and allows avoiding the effect of a “hole” around the suppressed
multiple reflection.
Decon operator parameters – parameters of the subtraction operator
• Operator length – sets the shaping filter length in milliseconds. As a rule, the filter length
should be equal to or greater than the wavelet length. Increasing the filter length to a certain
extent leads to stronger subtraction and also increases the time needed to run the procedure.
• White noise level – regularization parameter. Increasing this parameter leads to a more stable
result and softer subtraction effect.
Ghost delay – static data shift used to generate the ghost wave models
• Constant delay (ms) – bulk shift in ms that will be introduced into the data. This value should
correspond to the difference between the reflection arrival time and the associated ghost wave
arrival time averaged across all traces.
• From Header — sets various shifts for each trace from the header field. The user needs to
fill in the header field in advance.
Use adjacent traces – inclusion of adjacent ghost wave model traces in filter calculation and
averaging of the obtained filters across several traces
• Number of traces – this parameter sets the number of adjacent traces of the ghost wave model on each side of the current trace that will be taken into account during the filter generation. If it is equal to zero, adjacent traces will not be used, and the filter will be built based on the single current trace of the model. If it is equal to 1, three traces will be used: the central one and one adjacent trace on each side, etc. This parameter is discussed in greater detail in the Wave Field Subtraction module description. Increasing this parameter amplifies the subtraction effect but results in a longer procedure run time. It is a good idea to start with 3 adjacent traces and then try to increase or decrease this number.
• Filter averaging base (tr) – sets the number of traces with which the calculated filters will
be averaged. Usually it is recommended to enter a relatively large number here. You can start
with 25 and work up or down from there. Increasing this parameter results in a softer
subtraction, while decreasing it makes subtraction stronger.
Band transform – this option limits the frequency range within which the operations are performed.
It allows decreasing the processing time and performing additional frequency filtering of the data.
• The Low frequency and High frequency parameters set the lower and upper boundaries of
the used frequency range, respectively.
Output ghost model – when this option is on, the module does not subtract the model from the original wavefield but outputs the model itself instead. This can be used for parameter testing and debugging.
Dropped / Missed Shots Correction
When acquiring marine data, situations arise when the recording system triggers at the wrong time and either produces and records an extra (dropped) shot or misses a scheduled one. If the navigation is recorded separately with its own autonomous numbering of the shots, the shot numbers in the data then no longer correspond to the numbers in the navigation files, which makes the geometry input more difficult.
This module is designed to reconcile the numbers of the shots in the data with their numbers in the navigation files. The traces with shot numbers from the List of dropped shots are rejected from the flow, and the following numbers are reduced accordingly. On the contrary, the numbers of the shots that follow the numbers from the List of missed shots are increased.
It is assumed that the shot numbers are stored in the FFID field, and all the manipulations are made with this field only.
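A minimal sketch of the renumbering logic (hypothetical Python code; dropped and missed stand for the two lists):

def corrected_ffid(ffid, dropped, missed):
    if ffid in dropped:
        return None  # the trace is rejected from the flow
    # numbers following dropped shots decrease, numbers following missed shots increase
    shift = sum(1 for m in missed if m <= ffid) - sum(1 for d in dropped if d < ffid)
    return ffid + shift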
Swell Filter
The Swell Filter module calculates the swell statics and enters them into single-channel marine data. The module can also work with multi-channel data; in this case, the data must be sorted by channels and the flow must be run in frame mode so that each frame contains the data of only one channel.
For the module to operate, a seafloor pick must first be created and stored in a header field. The module smooths the pick using the trimmed mean method in a sliding window, calculates the difference between the initial pick and the smoothed one, and enters this difference into the trace as the statics.
Parameters
• Seafloor pick – select the header field that contains the seafloor pick value from the drop-
down list.
• Averaging base (traces) – the width of the sliding window expressed in traces.
• Max / Min rejection (%) – the rejection percentage.
• Save statics to header – select the header in which the values of the calculated static correction for the given trace will be written.
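A minimal sketch of the smoothing and statics calculation (hypothetical Python code, assuming the rejection percentage is split equally between the smallest and the largest values in the window):

import numpy as np

def swell_statics(pick, base=21, rejection=20.0):
    half = base // 2
    n = len(pick)
    smooth = np.empty(n)
    for i in range(n):
        w = np.sort(pick[max(0, i - half):min(n, i + half + 1)])
        k = int(len(w) * rejection / 100.0 / 2.0)  # values trimmed at each end of the window
        smooth[i] = w[k:len(w) - k].mean()
    return pick - smooth  # the difference entered into the traces as statics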
Gas Hydrate Stability Zone
BSR (bottom simulating reflector) – a non-lithological reflecting boundary roughly tracing the sea
floor. In many cases, such reflections are associated with the bottom of the gas hydrate stability zone.
The module allows determining the position of the theoretical bottom of the gas hydrate stability zone
based on the specified depth, temperature and pressure and assuming that the gas composition is
100% methane. The module input data consist of a sea floor pick contained in a header. The calculated
BSR values are saved to another header.
Parameters
• Bottom depth (m) – header containing the bottom depth in meters.
• BSR depth (m) – header where the BSR depths will be saved.
• Salty water density (kg/m3) – salt water density in kg/m3.
• Bulk density (kg/m3) – sediment density in kg/m3.
• Temperature on sea floor (deg C) – temperature on the sea floor in degrees Celsius.
• Termic gradient (deg/m) – thermal gradient in degrees per meter.
Pressure is – pressure model:
• Hydrostatic – hydrostatic pressure model. It is assumed that the sediments are of the fluid-supported type, and their pressure is the same as that of a water column of equal height; the contribution of the rocks is ignored.
• Lythostatic – lithostatic pressure model. It is assumed that the sediments are of the matrix-supported type, and the pressure is calculated based on the sediment density.
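A minimal sketch of the two pressure models (hypothetical Python code; rho_w and rho_b correspond to the salt water and bulk densities above, z is the depth of interest and z_sea the sea floor depth, both in meters):

G = 9.81  # gravitational acceleration, m/s^2

def pressure(z, z_sea, rho_w=1030.0, rho_b=2000.0, lithostatic=False):
    if not lithostatic:
        return rho_w * G * z  # hydrostatic: the whole column is treated as water
    # lithostatic: water column down to the sea floor plus a sediment column below it
    return rho_w * G * z_sea + rho_b * G * (z - z_sea)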
HiRes Statics Calculation*
The module is designed to calculate static corrections for marine data. Corrections are applied for
changes in sea level (tides) and vertical movements of receivers.
Input data: a dataset to which the static corrections will be applied, a seafloor pick (must exist for every trace of the dataset), and the velocity of the seismic wave in water.
Output data: traces with the calculated static corrections stored in the header specified by the user.
Seafloor topography has an effect on the wave pattern similar to that of the tides. Therefore, when applying static corrections, it is necessary to take its influence on the data into account. This is done by estimating the areal low-frequency trend of the seafloor pick. The trend is then subtracted from the data, after which the corrections for the sea level change can be calculated correctly.
HiRes Statics Calculation module operation window. Panel No. 1 is responsible for the input data. Here you
need to specify the input dataset path, to which static corrections will be applied, seafloor picking, and water
velocity. Panel No. 2 is responsible for the type of corrections that are applied to the data. It is necessary to
specify the headers in which the static correction values will be recorded, as well as the statistical parameter
that will be taken into account when averaging the picking. Panel No. 3 is responsible for the correction
calculation parameters for the seafloor topography.
Parameters:
Dataset – path to the dataset to which the static corrections will be applied.
Seafloor pick – the field in which the trace header containing the seafloor picking should be specified.
Tidal statics— the field is responsible for the input of static corrections for tides.
Save tidal statics to header – a header into which the sea-level change static correction value is
saved.
Receiver position corrections – the field is responsible for applying static corrections for receivers
elevation changes during the survey.
Save receiver position corrections to header – a header into which the corrections for the receiver
elevation changes will be saved.
Min/max rejection, [%] – the percentage of values that are rejected when calculating the average time for a given shooting line (or a given channel in the case of Receiver position corrections). See "The operation principle".
Estimate and subtract seafloor topography trend – the field is responsible for correction for the
seafloor topography, which distorts the corrections for the sea level change.
• Save to header – a header into which the seafloor change correction value is saved.
• Min/max rejection, [%] – the percentage of the maximum and minimum points of the sample that are not taken into account when calculating the low-frequency trend characterizing a flat-lying seafloor.
• INLINE_NO mixing base, [bins] – a parameter that determines the size of the window in
Inlines for calculating the low-frequency trend for a particular point in space.
• XNLINE_NO mixing base, [bins] – a parameter that determines the size of the window in
Crosslines for calculating the low-frequency trend for a particular point in space.
ATTENTION: If both types of static corrections are enabled in the module parameters (for the channel elevation changes and for the tide), the correction for the sea level change is applied to the dataset first, followed by the correction for the channel elevation.
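The min/max rejection mentioned above can be pictured as a trimmed mean; a minimal sketch (a hypothetical helper, not the module's code):

import numpy as np

def trimmed_mean(values, reject_pct):
    # Average pick times after dropping the lowest and highest reject_pct
    # percent of the sample (the Min/max rejection parameter).
    v = np.sort(np.asarray(values, dtype=float))
    k = int(len(v) * reject_pct / 100.0)   # samples dropped at each end
    trimmed = v[k:len(v) - k] if len(v) > 2 * k else v
    return trimmed.mean()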
PZ Calibration
The module is designed to suppress ghosts and reverberation waves by summing hydrophone and
geophone data after evaluating and applying phase and amplitude corrections to one of the
components of the data. Correction assessment is performed within a user-specified window based
on either Common Receiver (CR) / Common Mid-Point (CMP) pre-stack gathers or CMP stacked
sections, and is recorded in two separate headers (phase correction and amplitude correction). Phase correction – the phase rotation is constant for all traces of each gather/stack, ensuring maximum energy after summation. Amplitude correction – the coefficients by which the geophone and hydrophone seismograms must be multiplied before summation in order to minimize reverberations.
Theory
In seismic applications, the PZ summation technique is employed to mitigate the impact of ghosts
on recordings made by a bottom station or bottom cable receiver.
The core concept revolves around the wave fields reaching the P (pressure) and Z (vertical)
components. Depending on their approach direction, these waves exhibit differing polarities. This
polarity contrast arises due to the distinction between scalar (pressure) and vector
(displacement/velocity/acceleration) dimensions. Pressure, as detected by a hydrophone, remains
direction-agnostic, whereas velocity, as registered by a geophone, varies with the wave's direction.
Consequently, assuming that the ascending wave exhibits the same polarity on both components
implies the reverse polarity for descending waves.
Technologies of broadband marine seismic exploration: problems and opportunities – Yu.P. Ampilov, M.L. Vladov, M.Yu. Tokarev
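This polarity reasoning can be illustrated with a minimal numeric sketch (arbitrary trace length and ghost lag; not the module's code):

import numpy as np

# The up-going primary arrives with the same polarity on P and Z;
# the down-going ghost flips sign on the Z component only.
n = 500
primary = np.zeros(n)
primary[100] = 1.0
ghost = np.roll(primary, 40)      # delayed down-going arrival

p_comp = primary + ghost          # hydrophone: both with the same sign
z_comp = primary - ghost          # geophone: ghost polarity reversed

pz_sum = 0.5 * (p_comp + z_comp)  # ghost cancels, primary remains
assert np.allclose(pz_sum, primary)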
Input: The input dataset consists of trace ensembles. In each CR/CMP gather or in each CMP
stack, the corresponding P and Z components of the trace appear sequentially. For instance, for a
CR gather the following sorting could be applied: REC_SLOC: FFID: COMP, where COMP is a
header indicating the trace's corresponding component.
Input requirements:
1. High-amplitude noise should be suppressed (at least within the correction estimation
window).
2. Spherical divergence correction should be applied to the data, ensuring amplitude stationarity
within the correction estimation window.
3. There should not be a significant static shift between the two components (phase shift
estimation will be performed within the range of -180 to 180 degrees).
The module can process both pre-stack seismograms and stack sections. When pre-stack
seismograms are provided along with the complete search range of phase shift angles, the phase
shift evaluation can be time-consuming. Therefore, the suggested workflow for pre-stack data
involves the following steps:
1) For an initial rough estimate of the phase shift magnitude, it is recommended to input stacked
sections that include alternating P and Z traces, and define the entire phase correction search
range.
2) Subsequently, you can consider the value obtained in the preceding step as the initial
approximate phase correction value, and define a narrow interval for searching the phase
correction angle for each pre-stack seismogram in its vicinity.
Module parameters
The module operates in three modes:
1. Estimate Amplitude and Phase Corrections (Write to Headers): In this mode, the module
calculates both amplitude and phase corrections and writes the results to the appropriate
headers.
2. Apply Amplitude and Phase Corrections: In this mode, the module applies the previously
calculated amplitude and phase corrections stored in the headers to the data.
3. Estimate and Apply Amplitude and Phase Corrections: This mode encompasses both the
estimation and application of amplitude and phase corrections in a single step.
• Phase Correction Search Interval: This refers to the range in which phase corrections are
sought for phase adjustment, aimed at maximizing energy after summation.
• Phase Correction Step: This is the incremental step used within the phase correction search
interval.
• Amplitude Correction Type – the mode used for calculating the amplitude correction.
- Minimum ACF Energy - In this mode, amplitude correction is estimated based on
the minimum energy of the autocorrelation function, which signifies a reduction in
reverberations.
- Mean Absolute Value - In this mode, all traces are adjusted to a uniform amplitude
level.
• Amplitude correction search interval – the interval within which the amplitude correction coefficients will be sought, providing minimum energy of the autocorrelation function. The range is from 0.1 to 0.9.
• Amplitude correction step – step within the amplitude correction search interval.
• Autocorrelation length – length of the autocorrelation function.
• Time interval for corrections estimation - time interval within which the magnitude of the
amplitude and phase corrections will be assessed.
• Phase correction header – header in which the phase correction values will be written.
• Amplitude correction header – header in which the amplitude correction values will be written.
• Use ensembles - the work is performed within ensembles. Ensembles are determined using
the headers from the Trace Input module.
• Phase correction header – select the header in which the phase correction values are written.
• Amplitude correction header – select the header in which the amplitude correction values are written.
• Use ensembles - the work is performed within ensembles. Ensembles are determined using
the headers from the Trace Input module.
When the Use ensembles mode is on, header values for the corrections are taken from the first trace of each ensemble; otherwise, they are taken from the first trace of each frame.
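The estimation performed in the first mode can be pictured as a brute-force scan over phase angles; a minimal sketch assuming scipy is available (phase_rotate and estimate_phase_correction are hypothetical helpers, not the module's API):

import numpy as np
from scipy.signal import hilbert

def phase_rotate(trace, deg):
    # Constant phase rotation of a trace via its analytic signal.
    return np.real(hilbert(trace) * np.exp(1j * np.deg2rad(deg)))

def estimate_phase_correction(p_traces, z_traces, lo=-180, hi=180, step=5):
    # Pick the rotation of Z that maximizes the energy of the P+Z sum.
    best_deg, best_energy = 0.0, -np.inf
    for deg in np.arange(lo, hi + step, step):
        summed = p_traces + np.array([phase_rotate(z, deg) for z in z_traces])
        energy = np.sum(summed ** 2)
        if energy > best_energy:
            best_deg, best_energy = deg, energy
    return best_deg

The amplitude correction search is analogous: coefficients from the search interval are scanned and the one minimizing the autocorrelation energy (or equalizing mean absolute amplitudes) is kept.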
The parameters of this mode are similar to those described above. Phase and amplitude correction
values will be calculated and saved to specified headers, then the corrections will be applied to the
data.
After completing the calculation and applying amplitude and phase corrections, use the Trace
Math module for summing successive traces of P and Z components:
SRME group of modules
Required headers
Before using the modules from the SRME group, make sure that the required headers have been created in the database. If you are setting up a project in version 2015.2 or above, the necessary headers are added automatically. The END_ENS header is added automatically only for projects created in version 2016.1 or above.
Starting from version 2016.1, all multiple wave prediction modules have been updated with the in-
flow versions of the standalone modules. This means that the entire multiple wave prediction
procedure is now performed in a single workflow without creating any additional datasets or headers.
The theoretical aspects and operating principles of the modules remain the same. A detailed
description of the theoretical and practical basics of multiple wave suppression is provided in
“Seismic multiple removal techniques” by E. Verschuur.
The multiple wave prediction flow as well as the relevant parameters for each module are described
below.
Trace Input
The input data for the multiple wave prediction workflow consist of CDP seismograms. The data
must be pre-sorted by CDP using the Resort module (CDP:OFFSET).
Note: You can run the multiple wave prediction process for several lines at the same time, including
both parallel 2D lines and different 3D survey streamers. In this case, you need to pre-sort the data
by the R_LINE:CDP headers, where R_LINE is the header containing the streamer or profile number.
The following keys should be selected in the Trace Input module: R_LINE:CDP:OFFSET, Number
of ensemble fields = 2.
2D SRME Interpolation
This module is used for interpolation of data to a regular grid. A regular source and receiver grid (i.e.
a grid where each receiver point also acts as a source for another trace) is required for the multiple
wave prediction procedure.
Interpolation is applied to CDP seismograms (within the seismogram) using the step defined in the
module parameters. Missing traces are copied from the nearest available trace and shifted in time
using partial kinematics. The negative part of the seismogram is completed based on the reciprocity
principle.
ATTENTION! The module takes CDP seismograms as input. Before interpolation, the data must be sorted by CDP using the Resort module.
Module parameters
• Sail line header – additional sorting key that allows separating the data acquisition lines. It
can also be used as a sorting key to separate the streamers in 3D surveys (for example, header
R_LINE).
• Ensemble header – primary sorting key defining the ensemble in which the data will be
interpolated. It is recommended to use CDP seismograms (CDP header).
• Bin size, [m] – current dataset bin size. You must specify the bin size that was used for data
binning (for example, in Marine Geometry Input).
• Source and receiver step, [m] – distance between the source and receiver points. Since SRME requires a regular grid, the source and receiver points must have the same step. This value must be an integer multiple or an integer fraction of the original CDP bin size (×1, ×2, … or /2, /3, …).
• Max output offset, [m] – maximum offset (distance) by which the data will be interpolated.
• Symmetric part length, [m] – the length of the negative-offset part, defined as a positive value (without the minus sign).
Reference dataset
Partial NMO
After interpolation, you need to apply kinematic corrections to the interpolated traces. As a result of
interpolation, the N_OFF (new offset) header is completed in the interpolated dataset, while the
OFFSET field is retained from the original trace. This N_OFF header should now be input into
NMO/NMI as “Header with desired non-zero offset”.
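For a single trace and constant velocity, the partial NMO/NMI mapping can be sketched as follows (a simplified illustration with hypothetical names; the module uses the velocity law supplied in NMO/NMI):

import numpy as np

def partial_nmo(trace, dt, x_old, x_new, v):
    # Move a trace from offset x_old to offset x_new: each output sample at
    # time t takes the input amplitude at sqrt(t^2 + (x_old^2 - x_new^2)/v^2).
    t_axis = np.arange(len(trace)) * dt            # seconds
    arg = t_axis ** 2 + (x_old ** 2 - x_new ** 2) / v ** 2
    t_src = np.sqrt(np.maximum(arg, 0.0))
    return np.interp(t_src, t_axis, trace, left=0.0, right=0.0)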
2D SRME Prediction
This module predicts the multiple wave field based on a regular observation grid.
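Conceptually, on a grid where every receiver position is also a source position, the prediction follows the SRME principle described by Verschuur: the multiple model is obtained by temporal convolution of traces summed over surface positions. A deliberately naive sketch (a hypothetical function; wavelet deconvolution, tapers and scaling are omitted):

import numpy as np
from scipy.signal import fftconvolve

def srme_predict(data):
    # data: ndarray (n_pos, n_pos, n_t) of traces p[s, x, t] on a regular
    # grid. Multiple model: m[s, r, t] = sum over x of p[s, x] (*) p[x, r],
    # where (*) is temporal convolution.
    n_pos, _, n_t = data.shape
    mult = np.zeros((n_pos, n_pos, 2 * n_t - 1))
    for s in range(n_pos):
        for r in range(n_pos):
            for x in range(n_pos):
                mult[s, r] += fftconvolve(data[s, x], data[x, r])
    return mult[..., :n_t]   # keep the original time window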
Parameters
Aperture – the length of the negative part of the seismogram that will be used for multiple wave prediction. It is recommended to set a value smaller than or equal to the Symmetric part length parameter of the 2D SRME Interpolation module.
Frequency range, [Hz] – you can limit the frequency range to speed up the multiple wave
prediction process.
Sail line header – additional sorting key that allows separating the data acquisition lines. It can also be used as a sorting key to separate the streamers in 3D surveys (for example, the R_LINE header). Use it together with the corresponding option in the 2D SRME Interpolation module.
Apply shaping filter – it is recommended to enable this option to restore the original pulse
waveform.
Skip virtual traces – if this option is selected, the module will output only those traces that are
required for restoration of the original data.
Maximum processing threads – maximum number of threads to be used for module execution.
Maximum memory usage, [MB] – maximum amount of memory to be used for module execution.
2D SRME Geometry Return
At the final stage of the SRME procedure, the predicted multiple wave field must be reverted to the
original dataset geometry.
This module takes from the workflow only those traces that correspond to a trace of the source dataset, applies the original N_OFF offset to them (while the regular offsets are stored in the OFFSET header), and returns them to the workflow.
Parameters
In the 2D SRME Geometry Return module, select the dataset and header that you previously specified
in the Reference dataset and Reference header fields in the 2D SRME Interpolation module.
EXAMPLE WORKFLOW
In this workflow, the input data are interpolated to a regular source-receiver grid. After that, partial NMO/NMI is applied, the multiple model is predicted using the 2D SRME Prediction module, and the predicted multiple model is reverted to the original geometry (only at the points through which the source traces passed) using 2D SRME Geometry Return. The result is saved to a dataset.
1. Trace Input
2. 2D SRME Interpolation
3. Partial NMO (NMO/NMI module) – shifts the interpolated traces to the regular offsets defined in the N_OFF field.
4. 2D SRME Prediction
5. 2D SRME Geometry Return
6. Partial NMO (NMO/NMI module) – reverts the traces to the original offsets.
7. Trace output – Saves the results to a dataset.
Now the multiple wave field can be subtracted using the Wavefield Subtraction module.
Interpolation
Profile Interpolation* (Interpolation of profile data on a regular grid)
The module is designed for data interpolation onto a regular grid. It is a stand-alone module and therefore does not require any input/output modules in the flow.
Input data
The input data for the interpolation procedure are traces residing in one or several datasets, together with trace coordinates, which can reside in any trace header fields. All input traces must have an equal number of samples.
Module operation
To activate the module, add it to the flow and click the Run button; a parameter dialog will appear:
Parameters
• Add profile... – adds raw data for interpolation; once the data have been added, the name of the dataset appears in the list.
• Output dataset... – allows specifying the dataset to which the interpolation results will be saved.
• Save Template/Load Template – saves/loads templates to/from the database.
• Define regular grid... – opens an interactive tool for specifying the regular grid.
• Collar – specifies the size of the vicinity used for data retrieval at the edges of the regular grid; the size is given in steps of the regular grid.
The module operation requires specifying the input data (the Add profile... button) and indicating which trace headers correspond to the X and Y coordinates (the X Coordinate and Y Coordinate items).
After that, specify a regular grid for interpolation by clicking the Define regular grid... button; a map window will appear, where the raw profiles and the regular grid (if any) are displayed:
A window displaying the current cursor coordinates appears as well. The tools available in the map window are:
• Pan Image – positioning of the map
• Zoom In/Zoom Out – zooming in/out of the display scale
• Define Grid – defining the regular grid parameters
• Move Grid – moving the regular grid
• Rotate Grid – rotating the regular grid around its origin
• OK – closes the dialog and saves the parameters
• Cancel – closes the dialog without saving the parameters
To specify the regular grid, click the Define Grid button; a parameter dialog will appear:
If necessary, the position of the regular grid can be adjusted by moving and rotating it (the Move Grid and Rotate Grid buttons, respectively).
Upon completion of the regular grid specification, click the OK button to save the parameters.
Then specify the dataset to which the interpolation results will be saved (the Output dataset... button) and perform the interpolation (the Interpolate button).
If required, save/load interpolation templates (the Save Template/Load Template buttons), as the setup is not preserved after quitting the module.
The CDP_X and CDP_Y headers in the output dataset will contain coordinates in the system corresponding to the raw data, while the ILINE_NO and XLINE_NO headers will contain cell numbers along the X- and Y-axes of the regular grid.
CCP-CMP X Interpolation
The module performs CCP–CMP X interpolation.
Operating procedure
1. Select the required seismogram file from the database and specify the required headers.
2. In the Cutting parameter field, set the cutting parameter that characterizes the maximum allowed difference between the sum of distances (A1,B) + (A2,B) and the distance (A1,A2). Traces A1 and A2 are the traces in the flow closest to trace B, the trace in the database (a small sketch of this criterion follows the list below).
• No Cutting – the cutting parameter is ignored (the result of interpolation will be written in B,
regardless of its distance from the near traces).
• Custom (get closer trace) – if the difference between the sum of distances (A1,B) + (A2,B)
and the distance (A1,A2) is greater than the cutting parameter, then the closest trace from the
flow to it will be written in B.
• Custom (miss the trace) – if the difference between the sum of distances (A1, B) + (A2, B)
and the distance (A1, A2) is greater than the cutting parameter, then nothing will be written
in B.
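A minimal sketch of the cutting criterion (hypothetical helper names):

import numpy as np

def cutting_decision(a1, a2, b, cutting_parameter):
    # Compare d(A1,B) + d(A2,B) - d(A1,A2) with the cutting parameter.
    # a1, a2, b: (x, y) coordinates of traces A1, A2 and B.
    d = lambda p, q: np.hypot(p[0] - q[0], p[1] - q[1])
    excess = d(a1, b) + d(a2, b) - d(a1, a2)
    if excess <= cutting_parameter:
        return "interpolate"          # interpolation result is written to B
    return "closest-or-skip"          # depends on the selected Custom mode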
Parameters
• Dataset – data set selection field. Click Browse, in the dialog box that opens, select the
required seismogram file from the database.
• Cutting parameter – the cutting parameter.
• Cutting Mode – the cutting mode (see Operating procedure).
• Matching field – the header whose values will be used for iterating over traces.
• Geometry fields (X coordinate and Y coordinate) – the header fields from which trace coordinates will be taken for each Matching field value.
Spatial Interpolation (previous name – X Interpolation)
This module performs linear interpolation of a dataset in the horizontal direction with a regular step.
When the module is launched, the following dialog box appears:
Module parameters
• Distance header – header containing the cumulative distances between the traces and the
beginning of the profile. These values can be calculated using the Compute Line Length
module.
• New dl – value that defines the step between the output traces. It must be specified in meters.
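The operation can be sketched as follows (a hypothetical illustration built on numpy's linear interpolation):

import numpy as np

def resample_profile(distances, traces, new_dl):
    # distances: cumulative distance (m) per input trace, e.g. from the
    # Compute Line Length module; traces: (n_traces, n_samples); new_dl: m.
    new_dist = np.arange(distances[0], distances[-1] + 1e-9, new_dl)
    out = np.empty((len(new_dist), traces.shape[1]))
    for i in range(traces.shape[1]):          # sample-by-sample interpolation
        out[:, i] = np.interp(new_dist, distances, traces[:, i])
    return new_dist, out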
3D Volume Zero-Padding*
The stand-alone module analyses the input dataset and adds zero-padded traces to every empty cell of the specified inline/crossline grid. The result is saved to another dataset.
The module calculates ILINE_NO, XLINE_NO and CDP values for each trace in the dataset. The CDP bins that do not contain any traces from the input dataset are marked as empty. A null trace is generated for each of them, with the ILINE_NO, XLINE_NO, CDP_X and CDP_Y headers set according to the center of the empty bin (a sketch of this bin-occupancy logic follows the Grid parameters below).
Parameters
X coordinate/ Y coordinate – specify the headers that contain the coordinates of the trace's midpoint.
Empty bin marker – specify a header for the traces of the output dataset, which indicates whether
the trace is new (value=0) or belongs to the original dataset (value=1).
NOTE: by default traces from the input dataset are marked TRC_TYPE=1, new traces added to
empty CDP bins are marked as TRC_TYPE=0.
Grid – set the parameters of the binning grid:
• Origin X and Origin Y – coordinates of the grid origin (bottom left corner).
• Cell size X – the horizontal size of the cell (bin).
• Cell size Y – the vertical size of the cell (bin).
• Origin Iline No and Origin Xline No – initial numbers of the inlines and crosslines.
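A sketch of the bin-occupancy logic referred to above (hypothetical helper; mapping inlines to the Y direction is an assumption of this illustration):

import numpy as np

def find_empty_bins(x, y, origin, cell, n_il, n_xl):
    # x, y: midpoint coordinates from the X/Y coordinate headers;
    # origin: (Origin X, Origin Y); cell: (Cell size X, Cell size Y).
    il = ((y - origin[1]) // cell[1]).astype(int)    # inline index
    xl = ((x - origin[0]) // cell[0]).astype(int)    # crossline index
    occupied = np.zeros((n_il, n_xl), dtype=bool)
    occupied[il, xl] = True
    empty_il, empty_xl = np.where(~occupied)
    # a null trace is generated at the centre of each empty bin:
    cx = origin[0] + (empty_xl + 0.5) * cell[0]
    cy = origin[1] + (empty_il + 0.5) * cell[1]
    return empty_il, empty_xl, cx, cy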
Refraction
Travel Time Inversion*
The Travel Time Inversion module is used to process refraction first-arrival data (diving waves with curved ray paths) using the Herglotz-Wiechert (HW) method as well as the tomographic approach.
Travel Time Inversion is a so-called standalone module, i.e. a module that generates the flow by itself.
All data processing within the module takes place within a certain processing scheme. A scheme is a
combination of first arrival picks, processing and visualization parameters, and the resulting velocity
model. Each scheme is stored in a separate directory within the RadExPro project.
When execution of the flow is started (Run), the following dialog boxes will appear: Scheme
Parameters and Travel Time Inversion.
Scheme Parameters – this dialog box is used to add first arrival picks to the scheme and to load topography.
The left half of the dialog box contains a project database viewer showing the picks required to run the module.
A list of picks added by the user to the scheme is shown in the right half of the dialog box.
Picks are added to the scheme by double-clicking their name in the project database or clicking the
>> button.
To delete a pick from the scheme, select it from the list and press delete on the keyboard or click
the << button in the dialog box.
To add a topography line, click the Load relief… button and select a relief-defining pick linked to
the REC_X header field.
IMPORTANT: For the module to run successfully, the first arrival picks should be linked to the
SOU_X and REC_X header fields, and the relief-defining pick should be linked to the REC_X header
field. To make sure this is the case, press the Pick headers button in the Save pick dialog box when
saving the pick. A new dialog box with two header lists will open, allowing you to select the necessary
fields.
One pick can contain first arrival curves obtained from several shot points (SP). In this case the pick
is divided into flank segments when it is imported into the project. Each segment corresponds to one
SP and has a unique name.
Travel Time Inversion – the primary working window of the module: the upper part of the window
shows the observed travel time curves (in red), theoretical travel time curves (in blue), and source
position (in pink); the velocity model, rays, and velocity values for specific points are shown in the
lower part of the window.
Scheme
This menu opens the Scheme Parameters dialog box used to edit the list of picks within the
scheme and the topography.
Picks
Editing picks
To start editing the picks, select the Edit Hodographs menu item or click the button on the
toolbar.
• Adding a point. Place the mouse cursor over the spot where a new point is to be created and press the left mouse button (MB1).
• Moving a point. Place the mouse cursor over the point to be moved, press and hold the right mouse button (MB2) and drag the point to a new time (hold the Ctrl key to move the point horizontally as well).
• Deleting a point. Place the mouse cursor over the point to be deleted and double-click the right mouse button (MB2).
• Deleting several points. While pressing the Shift key, press and hold the left mouse button (MB1) and select a rectangular area containing all points to be deleted.
• Selecting a pick. Move the cursor over the pick, press the Ctrl key and the left mouse button (MB1). Press Tab to cycle through the picks.
• Moving a pick. Place the mouse cursor over the pick, press Shift, then press the right mouse button and move the pick while holding down MB2+Shift.
Deleting picks
To delete a pick, first select it and then select Delete Pick from the Picks menu.
Saving picks
To save a pick, first select it and then select Save Pick to DB… from the Picks menu. A save dialog
box will appear allowing you to specify the pick name and project database section where the pick
will be saved.
To save theoretical and observed picks to a separate project database object, select Save All Theor
Picks to DB… or Save All Data Picks to DB… menu item for theoretical and observed picks,
respectively. A save dialog box will appear allowing you to specify the pick set name and project
database section where the set will be saved.
Pick smoothing
This procedure can be applied only to observed picks (either to an individual pick or to all picks).
To perform individual pick smoothing, select it in the editing mode and then select the Smooth
Pick… menu item or click the button on the toolbar.
The Smooth base parameter is used to specify the averaging window length in meters.
To perform smoothing for all observed picks, select the Smooth All Picks… menu item and specify
the Smooth base parameter.
Model
• Edit the palette used to display the velocity model (Edit Palette…)
• Load a velocity model in the GRD format (Load Model…)
• Save a velocity model in the GRD format (Save Model…)
• Specify a gradient velocity model manually (Define Gradient Model…)
• Extrapolate a velocity model to the edges of the sources and receivers (Extrapolate
Model…)
• Export a velocity model as a text file (Export Model to ASCII…)
• Enable/disable the display of velocity values obtained using the Herglotz-Wiechert (HW)
method (Show H-W values…)
Select the Model/Edit Palette… menu option. The palette setup dialog box will open.
Working with palettes in this module is generally similar to working with palettes in the Screen
Display module and is described in the RadExPro Plus 3.95 User Manual. The only difference is
the Palette limits field allowing the user to define the velocity value range based on the data (Get
limits from data) or manually specify the minimum (Lower limit) and maximum (Higher limit)
values.
Select the Model/ Load Model… menu option. A file selection dialog box will open:
To load a velocity model, select the necessary GRD file in the dialog box.
Select the Model/ Save Model… menu option. A file selection dialog box will open:
Specify the name of the file to which you want to save the model.
Select the Model/ Define Gradient Model… menu option. The gradient velocity model setup dialog box will open.
• XMin – velocity model start X coordinate
• XMax – velocity model end X coordinate
• Dx – velocity model cell width
• Depth – velocity model depth
• Dy – velocity model cell height
• Vmin – velocity model initial velocity
• Vmax – velocity model terminal velocity
Set all necessary parameters and click OK.
Select the Model/ Extrapolate Model… menu option. The velocity model extrapolation setup
dialog box will appear.
Specify the desired minimum and maximum X coordinates of the model in meters in the Xmin and
Xmax fields.
Exporting a velocity model as a text file
Select the Model/ Export Model to ASCII… menu option. In the file save dialog, specify the name of the file to which you want to save the model.
Enabling/disabling the display of velocity values obtained using the Herglotz-Wiechert (HW)
method
The Model/ Show H-W values… menu option enables or disables display of values generated
using the HW method (if any).
Select the Model/ Smooth Model… menu option or press the button on the toolbar.
View
The View menu option opens a submenu containing the following commands:
Zoom In – increase the current zoom level. This option works similarly to the Zoom/Set Zoom menu option of the Screen Display module and is described in the RadExPro Plus 3.95 User Manual.
Zoom Out – decrease the current zoom level (this option works similarly to the Zoom/Unzoom menu option of the Screen Display module and is described in the RadExPro Plus 3.95 User Manual).
Fit to Scale – display the image in the “original” size (as set by the Set Scale option).
Select the View/ Set Scale menu option or click the button on the toolbar. The scale setup dialog box will open, allowing the user to specify the Horizontal scale and Vertical scale for the travel time curves (Traveltime) and the velocity model (Model).
Two methods can be used to solve the inverse problem: the HW method (Herglotz-Wiechert Inversion) and the tomographic method (Tomography).
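For reference, a standard form of the Herglotz-Wiechert inversion for a medium whose velocity increases monotonically with depth (the notation is assumed here and is not taken from the module):
$$z(X) = \frac{1}{\pi}\int_{0}^{X}\cosh^{-1}\!\left(\frac{p(x)}{p(X)}\right)dx, \qquad p(x) = \frac{dT}{dx},$$
where $p(x)$ is the slope of the travel time curve and $z(X)$ is the depth at which the velocity $1/p(X)$ is reached.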
Load the observed travel time curves from the RadExPro Plus database using the Scheme
Parameters dialog box as described above.
Select Herglotz-Wiechert Inversion in the Inversion field as the inverse problem solution method.
To set the procedure parameters, press the Parameters… button. This will open the setup dialog box:
This dialog box allows the user to specify the boundaries of the model (Grid size) that will be
created when the results obtained using the HW method are interpolated to a regular grid.
Filters… opens a dialog box allowing the user to limit the output data range. If the filter is on, the values outside the specified range will not be used for gridding.
To run the procedure, click the Calculate button. A dialog box will appear (VXH Grid):
The calculation results are presented as a table with the first three columns containing X coordinate
values in meters, Z coordinate values in meters, and velocity values in meters per second. The fourth
column allows specifying whether the current value will be used in interpolation to a regular grid.
Editing cells
If necessary, the results can be edited. To change the value in a cell, double-click the cell with the left mouse button or place the cursor over it and press the Enter key. To exit the editing mode and save the result, press the Enter key; to discard the edit, press the Esc key.
Export… opens a dialog box allowing the user to select an ASCII file name to which the HW
calculation results will be exported.
Adjust parameters… opens the interpolation grid setup dialog box (same as the VXH setup dialog
box).
Set all the necessary parameters and click the OK button. The velocity model that has been built will be displayed in the lower part of the screen.
Tomography
This procedure requires a source velocity model. It can be obtained using the HW method, specified manually (see Specifying a gradient velocity model) or loaded from an external GRD file. No
parameters are needed to run this procedure – just press the Calculate button. If you see the following
message,
it means that before running the procedure, you need to extrapolate the velocity model to make its
size along the X axis equal to or larger than the source data receiver range.
The forward (direct) problem solution is based on solving the eikonal equation and is performed in one of two modes: simple theoretical travel time curve building (the Eikonal equation mode) or theoretical travel time curve building with ray tracing (the Raytracing mode). A velocity model is necessary to solve the forward problem.
Analysis of observed travel time curve deviation from theoretical travel time curves
(Deviation analysis)
To calculate the standard deviation of the observed travel time curves from the theoretical ones, select the Picks/ Deviation analysis menu item. A dialog box containing a table with two columns will appear: SOU_X – the shot point coordinate, and STD Dev – the standard travel time curve deviation.
When a table line is selected using the mouse cursor, the travel time curves corresponding to the
shot point in the SOU_X field in that line will be highlighted in the Travel Time Inversion
working window.
Easy Refraction* (Processing of seismic refraction data)
(This module was developed in cooperation with “Geometria” company)
The module allows processing first arrival curves and building refraction boundaries using the
delay-time (classical reciprocal) method.
Easy Refraction is a standalone module, i.e. it shall be the only module in the flow.
All data processing within the module takes place within a certain processing scheme, which is a
collection of the time-curves, parameters, and a resulting model. When you add the module into the
flow, you will be asked to select a scheme.
Click the Browse button to either select an existing scheme or enter a name of the new one.
Status bar
The travel time curve panel is divided into two parts: the window where the user works with travel
time curves and the tree containing all loaded travel time curves grouped by their source positions.
The model panel is also divided into two parts: the window where the refraction boundaries, day
surface relief and wave velocities are shown and the tree containing all refraction boundaries that
have been built.
The travel time curve section and model section are synchronized on the X axis in such a way that
any scale change or panning in one of the windows results in the same change in another window.
Travel time curves and refraction boundaries are shown as checkboxes in the object tree. If a
checkbox is ticked, the corresponding travel time curve/boundary is shown on the screen, if not – it
is hidden.
The module allows selecting travel time curves using either the left or the right mouse button; the selected travel time curves are highlighted in red and blue, respectively. To select, press and hold (!) the corresponding mouse button. If the two selected travel time curves have reciprocal points, the reciprocal point time determined by the red travel time curve is shown as two crosshairs on the screen.
The status bar in the main window of the Easy Refraction module shows the total number of the
loaded travel time curves, the X and Y coordinates of the current cursor position, and the difference
between the reciprocal times of the two selected travel time curves (if the selected travel time curves
have any reciprocal points).
The Easy Refraction module has two different types of context menus opened by single-clicking the
right mouse button in the travel time curve editing window. The first type appears if no travel time
curve (red) was previously selected with the left mouse button:
If a travel time curve was selected before opening the context menu, the latter will look as follows:
The context menu items are the same as the ones in the Time Curves menu accessed through the
main menu of the Easy Refraction module and will be described in detail further in this document.
Hotkeys
Hotkey – Action
Delete – delete the selected travel time curve
RMB click – open the context menu in the travel time curve editing window
Shift + RMB + move – move a point on the selected travel time curve
Ctrl + 1, 2, …, 9 – show/hide layer 1, 2, …, 9
Ctrl + 0 – show/hide travel time sections that are not associated with any layer
LMB, MMB, RMB – left, middle, and right mouse buttons, respectively
Up, Left, Right, Down – up, left, right, and down arrow keys
Menu items
File
The File menu item is used to import/export data (travel time curves, boundaries, relief, velocities)
to/from the Easy Refraction module.
The module allows exchanging data both inside the RadExPro project and with external sources. It allows saving/loading the current project, which contains all information on the travel time curves, boundaries, relief and velocities obtained in the course of working with the module, and can also export travel time curves in the ST format (for further processing in the Zond2DST seismic tomography program).
Clicking the Load from RadExPro DB menu item opens the following dialog box:
All current project flows are shown on the left; the picks selected for import into the Easy
Refraction module are shown on the right. To import picks saved in the project into the Easy
Refraction module, select them in the left part of the window and press the >> button.
First arrival curves must be saved with correctly completed SOU_X and REC_X headers to enable
their processing in the Easy Refraction module.
Import
Clicking the Import menu item opens a dialog box that allows importing data from external files
(outside the RadExPro project):
The following input data types can be selected in this dialog box:
• Easy Refraction module project with the *.erproj extension. This format contains all information on travel time curves, boundaries, relief and velocities obtained in the course of working with the module.
• File with the *.erf extension. Data are stored in a binary format. This file type is created by the Easy Refraction module for the purposes of data exchange between projects. The file contains travel time curves with selected sections associated with different refraction boundaries in the Easy Refraction module format.
• Time curves – a file in the ASCII format consisting of three columns separated by spaces, colons or tabs. The first column is the X coordinate of the source, the second one is the X coordinate of the receiver, and the third one is the time.
• File in the ASCII format containing information on refraction boundary positions, relief and velocities as plain text.
Export
This menu item is used to export data for further processing in other applications.
• Easy Refraction module project with the *.erproj extension. This format contains all information on travel time curves, boundaries, relief and velocities obtained in the course of working with the module.
• File with the *.erf extension. Data are stored in a binary format. The file contains first arrival curves with selected sections associated with different refraction boundaries in the Easy Refraction module format.
• Autocad DXF – a file in the DXF format containing refraction boundary positions and a relief line for further formatting of the results in AutoCAD.
• File in the ASCII format containing text information on refraction boundary positions, relief and velocities in the form of a table.
• Time curves – files in the ASCII format containing three columns: source coordinates, receiver coordinates and first arrival times.
• File in the ASCII format containing information on refraction boundary positions, relief and velocities as plain text.
Zond2DST
File in the ST format serving as an input file for the Zond2DST seismic tomography program.
Clear project
This menu item deletes all data from the current project.
View
The View menu item is used to hide/show the travel time curve and model panels and select the
travel time curve display color scheme.
This menu item is used to hide/show the travel time curve section:
Model section
Color settings
Clicking this menu item opens a dialog box that allows setting up the color scheme for the travel
time curve section:
• Background color – the background color for the travel time curve section (black by default).
• Marker Colors
Colors used to display travel time curve sections associated with different refraction boundaries
(1 through 9).
Time curves
The Time curves menu item is used to edit travel time curves.
Edit
Select this menu item to edit the travel time curve. Clicking it opens the editing window for the
selected travel time curve:
The travel time curve editing window can be divided into three main sections:
The travel time curve visualization section dynamically displays changes applied to the travel time
curve. In terms of functionality it is the same as the travel time curve section of the Easy Refraction
module main window.
The table shows the coordinates of the points on the travel time curve. These coordinates can be
changed manually, resulting in the travel time curve being redrawn.
The travel time curve editing panel contains the following buttons and fields:
• Add
Adds a point to the travel time curve (at the end of the coordinate table).
• Insert
Inserts a point into the travel time curve (the point is added between the selected line in the
coordinate table and the next line).
• Remove
• X Shift
Shifts the travel time curve by a constant value along the horizontal axis (coordinate axis).
• T Shift
Shifts the travel time curve by a constant value along the vertical axis (time axis).
• Smooth curve
Smoothes the travel time curve with a floating window by three points.
• Curve name
• Source X
Interpolate
This menu item is used to interpolate the entire travel time curve system to a new interval by
receiver positions.
When this menu item is selected, a dialog box pops up, warning the user that all layer markings will
be reset:
• None – no interpolation.
• Linear – linear interpolation.
• Cubic Spline
• Fixed X
• Step X
Color mode
When this mode is active, each travel time curve in the list is displayed with a different color.
Delete
Delete all
This menu item allows building the difference between the two selected travel time curves to
evaluate refraction in the medium.
Shift to zero
This menu item adjusts all first arrival time values so that the first arrival time at the source coordinate becomes zero.
Marker
Highlights travel time curve parts associated with different refraction boundaries. Up to 9 refraction
boundaries can be built in the Easy Refraction module.
Smooth curve
Smoothes the travel time curve with a floating window by three points.
Selecting this menu item opens a dialog box prompting the user to save or overwrite the source
travel time curve:
Duplicate curve
This menu item creates a copy of the selected travel time curve.
Mirror curve
This menu item creates a mirror copy of the selected travel time curve.
Refraction surfaces
Delete all
Relief
Import
Imports the relief line from an external ASCII file consisting of two columns separated with spaces,
colons or tabs. The first column is the X coordinate and the second one is the absolute relief
elevation.
• Export
• Clear
If the Keep aspect ratio mode is selected, 1:1 scale is maintained for the X coordinate and the
depth when performing scaling in the model section.
Inversion
This section contains inverse problem solving methods for the seismic refraction method.
Reciprocal method
The delay-time (classical reciprocal) method for solving refracted wave inverse problems.
Automatic inversion
Automatic inverse problem solving using the delay-time method. When automatic mode is selected,
the processing sequence is as follows:
− building of a residual travel time curve and a t0 curve,
− building of refraction boundaries.
If refraction boundaries cannot be built automatically due to the survey system complexity, they can
be created manually by following all the procedures step-by-step.
Composite travel time curve building procedure. Composite travel time curves are built based on
the sections of the same color (associated with the same refraction boundary):
This menu item levels two selected travel time curves by the reciprocal time. If travel time curves
have no reciprocal points, leveling is not performed.
This menu item builds a residual travel time curve and a t0 curve for further analysis of the velocity
under the refracting surface and building of refraction boundaries. Travel time curves are created
automatically based on two selected (!) composite travel time curves.
If the two selected composite travel time curves have no reciprocal points, the reciprocal time for
calculation of the residual travel time curve and the t0 travel time curve is entered manually:
Refraction surfaces
Refraction boundary building procedure. To build a single refraction boundary, select the t0 travel
time curve by left-clicking it and the residual travel time curve by right-clicking it. Enter the value
of the velocity in the overlying stratum in the dialog box.
The velocity below the boundary can be calculated using automatic selection of the interval for
determination of the velocity by the residual travel time curve and the averaging base (Automatic
mode). Alternatively, these parameters can be entered manually.
GRM
A detailed description of this method can be found in The generalized reciprocal method of seismic
refraction interpretation (D. Palmer, 1980).
The generalized reciprocal method is a generalized version of the standard method of t0, the
description of which can be found in Seismic Survey (Gurvich, 1975).
The residual hodographs and T0 hodographs in the generalized reciprocal method, according to [Palmer, 1980], are defined as follows:
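The defining equations appear to be missing from this copy of the text; a standard formulation following Palmer (1980), with notation adapted to this section (A and B are the reciprocal sources, X and Y are receivers separated by the XY distance), is:
$$t_{T0} = \frac{t_{AY} + t_{BX} - \left(t_{AB} + XY/V_2\right)}{2}, \qquad t_{res} = \frac{t_{AY} - t_{BX} + t_{AB}}{2},$$
where $t_{T0}$ is the T0 (time-depth) hodograph and $t_{res}$ is the residual (velocity analysis) hodograph,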
and V2 is the apparent velocity in the refracting layer, determined from the residual hodograph.
Fig. 1 shows the ray paths corresponding to the above definitions. Fig. 2 shows the ray paths corresponding to the derivation of the travel time curve equation in the T0 method.
Fig. 1. The ray paths that define the T0 hodographs and the residual hodographs in the conjugated point method.
Fig. 2. The ray paths that define the T0 hodographs and the residual hodographs for the T0 method.
In contrast to the T0 method, the definitions of the T0 hodographs and residual hodographs additionally use the XY distance, called the conjunction base – the distance between the points where the rays emerge at the surface. The idea of the method is to find an XY distance such that the rays leave the refracting surface from a single point. In this case, the calculation of the refractor depth does not rely on the assumption that the part of the boundary between points C and E is flat (Fig. 2), which in certain cases can improve the accuracy of determining the refractor depth.
As can be seen from Figs. 1 and 2, the T0 method is a special case of the conjugated points method with XY = 0. The XY at which the rays emerge at the surface from a single point of the refractor is called optimal. Selecting the optimal XY distance is one of the key steps of the conjugated points method.
According to [Palmer, 1980], in order to determine the optimal XY distance, it is necessary to build a series of residual hodographs and T0 hodographs with different XY distances. The optimal XY distance corresponds to the residual hodograph with maximum smoothness and the T0 hodograph with minimal smoothness. In practice, determining the optimal XY distance is the main and most difficult task of the method.
Defining the locations of velocity changes in the refractor
One of the implementation features of the generalized reciprocal method is the ability to locate sharp changes in the velocity of the refracting formation based on characteristic changes in the slope of the residual hodograph (the previous figure shows the characteristic bend of the residual hodograph corresponding to a change in the velocity of the refracting layer).
After determining the velocity difference in the refractor by the hodograph, the refractor depth is
calculated by:
Parameters description:
To start the interpretation using the GRM method, select the layers and choose Interpretation/ GRM.
The upper left corner shows the "table of refractors"; the number of refractors strictly corresponds to the number of refractors marked by the user in the original hodograph window.
The lower windows show a family of T0 curves (time-depth functions) and residual hodographs (velocity analysis functions) corresponding to different XY distances, as well as the velocity law for each of the refractors. The colour of the plots corresponds to the colour assigned to the refractor in the original hodograph window.
There is a slider to the right of the T0 curve and residual hodograph families which allows you to interactively change the current XY and track the corresponding curve. The value of the current XY appears in the upper right corner of each window, and the curve corresponding to this XY is highlighted in blue for a T0 hodograph and in red for a residual hodograph.
According to [Palmer, 1980], to select the optimum XY the user should choose the smoothest curve from the residual hodograph family and the least smooth one from the T0 hodograph family. The ability to change the current XY interactively with the slider helps to visually assess the degree of smoothness of the curves. The family of curves is given in the table of refractors for each layer separately.
Min XY – the minimum XY distance for which T0 hodographs and residual hodographs are built.
Max XY – the maximum XY distance for which T0 hodographs and residual hodographs are built.
XY Inc. – the XY increment.
Theor XY – the theoretical XY, calculated automatically (based on the curve smoothness calculation).
XY – the current XY that will be used to build the boundary. This field is synchronized with the current XY of each family: it changes as the slider moves and, vice versa, when the value of the corresponding curve is selected in the table, XY changes.
Note that the theoretical XY may in some cases give incorrect results, because the smoothness calculation can be affected by, for example, spikes in the picks.
Travel Time Tomography* (first break seismic tomography)
Theoretical basis
Objective
The module is designed for seismic ray tomography based on first breaks of seismic waves.
Tomography is a classical non-linear inverse problem: we need to determine the properties of the
medium from the observation data.
There are two approaches to solving inverse problems: deterministic and stochastic. The program
uses the deterministic approach.
To find the solution, the medium is divided into rectangular cells. Velocity is assumed to be
constant within each cell. The problem is solved by finding the velocity values for each cell.
Input data
1. A set of first break travel time curves obtained in a certain observation system: $T_i^{observed}$.
2. Various known data, such as borehole data, boundaries and velocities obtained using other methods, as well as general information about the cross-section. All known data are transformed into the initial velocity model $S_j^{ini}$ and the confidence parameter, which represents the degree of the seismic interpreter's confidence in the model.
Output data
Distribution of seismic wave velocities in the medium consistent with the observation data and the
known information. In our case, it is the velocity value in each cell.
Method description
The inverse problem is solved using the deterministic approach by successively updating the
velocity model in such a manner as to reduce the discrepancy between the observed and calculated
travel time curves: $\|T^{observed} - T^{calc}\| \to \min$.
The solution is found using the successive approximation method. The following operations are
performed during each iteration:
1) The forward problem is solved for the current model $S^k$: the theoretical travel times $T^k$ are calculated.
2) The discrepancy $\|T^{observed} - T^k\|$ is evaluated.
3) The model correction $\Delta S^k$ is calculated based on the discrepancy $\|T^{observed} - T^k\|$.
4) The updated model is generated: $S^{k+1} = S^k + \Delta S^k$.
Regularization
Unfortunately, the above solution of the tomography problem is not unique: there are an infinite
number of models that are consistent with the observation data. In addition, the problem is ill-
conditioned in general: even minor variations in the input data can cause major changes in the
solution. Therefore, the tomography problem is ill-posed in the sense of Hadamard, as all inverse
problems in geophysics are.
To obtain a stable solution, Tikhonov's regularization is applied: the minimized functional is the sum of a data misfit term and a stabilizing term, $\Phi = \Phi_d + \Phi_m$, where $\Phi_d$ is a quadratic functional of the discrepancy between the observed and calculated values, $\Phi_d = \|\mathbb{W}_d(T^{calc} - T^{observed})\|^2$, and $\Phi_m$ is the regularizing (stabilizing) functional. The weight matrix $\mathbb{W}_m$ defines the nature of the regularizing functional. The specific type of the $\mathbb{W}_m$ matrix implemented in the program is described in the Regularizing functionals section below.
Functional minimization
Functional Φ is minimized iteratively using one of the gradient methods. During each iteration, a
model correction is calculated in such a manner so that the functional value would be steadily
reduced from iteration to iteration. The program uses the conjugate gradients method [2].
Regularizing functionals
Tikhonov’s regularization can be interpreted as narrowing down the class of models for which the
solution of the original problem is found. The regularizing functional determines the way in which
the model class is narrowed down.
One of the most commonly used regularizing functionals is the differentiating functional. Indeed,
application of the differentiating functional adds a limitation on “non-smoothness” of the model and
results in reformulation of the original inverse problem in the following form: find a solution
consistent with the observation data with the specified accuracy in a class of smooth models.
Sometimes solution of the inverse problem in a smooth model class is called Occam’s inversion [3].
For the first-order differentiating functional $\Phi_m = \|\mathbb{W}_m S^{cur}\|^2$, the weight matrix is the finite-difference operator
$$\mathbb{W}_m = \begin{bmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{bmatrix}$$
Since smoothness of the cross-section can vary depending on the direction, the functional can be
conveniently divided into two parts: one for horizontal smoothness and one for vertical smoothness.
The mutual contribution of these parts is controlled by the ratio parameter:
$$\alpha_x = \alpha \cdot \frac{ratio}{1 + ratio}, \qquad \alpha_z = \alpha \cdot \frac{1}{1 + ratio}$$
In addition to the stabilizing functional for model smoothness, a stabilizing functional for proximity
of the solution to the “a priori” model is implemented in the program:
$$\Phi_m^{\beta} = \left\|\mathbb{W}_{m\beta}\left(S^{cur} - S^{apr}\right)\right\|^2$$
where $\mathbb{W}_{m\beta}$ is the weight matrix representing the seismic interpreter's confidence in the "a priori" model. For example, for model cells located near a borehole, confidence will be high and the values of the corresponding $\mathbb{W}_{m\beta}$ elements will be close to 1.
The final total minimized functional looks as follows:
$$\Phi = \|\mathbb{W}_d(T^{cur} - T^{observed})\|^2 + \beta\left\|\mathbb{W}_{m\beta}(S^{cur} - S^{apr})\right\|^2 + \alpha_x\|\mathbb{W}_{mx} S^{cur}\|^2 + \alpha_z\|\mathbb{W}_{mz} S^{cur}\|^2$$
The contribution of the various stabilizing functionals can be controlled using the $\alpha_x$, $\alpha_z$ and $\beta$ coefficients.
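A minimal sketch of how a functional of this form can be minimized, assuming a sparse ray-path (Jacobian) matrix J and difference operators Dx, Dz have already been built; the names and the LSQR solver are illustrative (LSQR is itself a conjugate-gradient-type least-squares method), not the module's internals:

import numpy as np
from scipy.sparse import diags, vstack
from scipy.sparse.linalg import lsqr

def tomo_update(J, residual, s_cur, s_apr, w_conf, alpha_x, alpha_z, beta, Dx, Dz):
    # One regularized update: minimize ||J*ds - residual||^2
    #   + beta*||Wb*(s_cur + ds - s_apr)||^2
    #   + alpha_x*||Dx*(s_cur + ds)||^2 + alpha_z*||Dz*(s_cur + ds)||^2
    Wb = diags(w_conf)                       # per-cell confidence weights
    A = vstack([J,
                np.sqrt(beta) * Wb,
                np.sqrt(alpha_x) * Dx,
                np.sqrt(alpha_z) * Dz])
    b = np.concatenate([residual,
                        np.sqrt(beta) * (Wb @ (s_apr - s_cur)),
                        -np.sqrt(alpha_x) * (Dx @ s_cur),
                        -np.sqrt(alpha_z) * (Dz @ s_cur)])
    return lsqr(A, b)[0]                     # model correction ds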
Forward problem
During each iteration of the inverse problem solution, a forward problem is solved: the arrival times
are calculated based on the current velocity model.
In this module, the forward problem solution is implemented using the “Shortest path” method of
the graph theory. The method works as follows. First, a graph is built where the graph vertices
represent the medium points and the graph edges connecting the nodes are the paths through which
rays can propagate from point to point in the medium. The grid nodes are added to the graph as
vertices. In addition to the nodal points, intermediate points located between the grid nodes are
added to the graph as vertices to improve the forward problem solution accuracy. After that, the
adjacent graph vertices are connected with edges. The weight of each edge corresponds to the ray
propagation time between these two points in space. In this case, finding the ray corresponding to
the first wave break is equivalent to finding the shortest path from the source vertex to the receiver vertex on the weighted graph. The time along this shortest path is the first break time [4].
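The shortest-path search itself is classical Dijkstra; a compact generic sketch (not the module's code):

import heapq

def first_break_time(graph, source, receiver):
    # graph: {vertex: [(neighbour, edge_travel_time), ...]}
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, v = heapq.heappop(heap)
        if v == receiver:
            return t                         # first-break time
        if t > dist.get(v, float("inf")):
            continue                         # stale heap entry
        for u, w in graph.get(v, []):
            nt = t + w
            if nt < dist.get(u, float("inf")):
                dist[u] = nt
                heapq.heappush(heap, (nt, u))
    return float("inf")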
If the velocity model is changed, the rays computed during the next iteration may propagate through
totally different model cells. Therefore, the tomography problem is strictly non-linear. The iterative
approach is used to address the complications associated with the non-linearity of the problem. In
addition, it is generally assumed that model changes in the cells covered by the rays make a larger
contribution compared to changes in individual ray paths. To avoid abrupt changes in the model, an
additional limitation on the maximum velocity variation in each cell per iteration can be applied.
References:
[2] Zhdanov M.S., 2002. Geophysical Inverse Theory and Regularization Problems. Elsevier.
[3] Constable S.C., Parker R.L., Constable C.G., 1987. Occam's inversion: A practical algorithm for generating smooth models from electromagnetic sounding data. Geophysics, 52, 289–300.
[4] Moser T.J., 1991. Shortest path calculation of seismic rays. Geophysics, 56, 59–67.
Travel Time Tomography module description*
This is a standalone module, i.e. it does not require any input or output modules, and must be the
only module in the workflow.
The module is designed to solve the forward and inverse problems of seismic ray tomography.
The cell depth should not be too small. It is selected experimentally based on the objectives and the
execution time of the forward and inverse problem algorithms.
In addition to velocities, each cell has a confidence parameter which represents the user’s confidence
in the velocity specified for that cell. By default, this parameter is set to 0. The value of 1 corresponds
to absolute confidence in the specified velocity. The confidence value set for the basic model is the
background confidence level, and is equal to 0 by default.
You can edit the cell values manually or select a group of cells and define a velocity gradient. In
addition, you can use layers and boreholes when creating the initial model.
The program allows using known data about layers and boreholes to create the initial model. Two
parameters must be defined for each known data object: velocity and confidence.
For boreholes, velocity and confidence are applied within the specified Range of the borehole (the
Range value is defined when creating the borehole). Velocities specified for the borehole remain
constant within the entire Range. The confidence values propagate as follows (see the sketch after this list):
• In the immediate vicinity of the borehole, the confidence value specified at the time of
creation of the borehole will apply,
• At the maximum Range from the borehole, the background confidence value will apply,
• For all intermediate points, the confidence values will be calculated linearly.
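The falloff can be sketched as follows (a hypothetical helper mirroring the rules above):

def borehole_confidence(dist, rng, conf_borehole, conf_background):
    # dist: distance of the cell from the borehole; rng: the borehole Range.
    if dist >= rng:
        return conf_background
    return conf_borehole + (conf_background - conf_borehole) * dist / rng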
Layers are defined using boundaries. The specified velocity and confidence are applied to cells above
the boundary, and remain unchanged up to the surface or to the overlying layer.
After adding the known data (layers and boreholes), you need to convert them to an initial grid model
that will be input into the tomography workflow. To do this, press the Build Model button in the
Model menu.
4. Run inversion
Configure the inversion parameters (Inversions menu).
Finding the solution to the problem involves minimizing the total functional consisting of the portion
that represents the discrepancy between the theoretical and observed travel time curves and the
stabilizer which, in turn, is divided into 2 parts: one corresponding to the smoothness, and the other
one – to the confidence in the initial model. The Smoothness and Confidence parameters allow
controlling the contribution of each of those parts to the total functional. They are selected empirically
– for example, by running a single iteration and checking the Stabilizer and Functional values on the
graph using the Show Status command.
The Functional graph shows the total functional values. The Stabilizer is the combined contribution
of all stabilizing functionals (Smoothness and Confidence) to the total functional. The stabilizer’s
contribution to the total functional should be sufficiently large, i.e. the values on the axes must be at
least of the same order of magnitude. To change the contribution of one of the stabilizing functionals,
estimate the ratio of that stabilizing functional to the total functional and adjust the corresponding
weight accordingly.
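Schematically, in conventional notation, the minimized functional can be written as follows (the manual does not give the exact expression, so the symbols are illustrative):

    \Phi(m) = \lVert t_{obs} - t(m) \rVert^2 + \alpha_s\, S(m) + \alpha_c\, C(m)

where t_obs are the observed first-break times, t(m) – the theoretical times for model m, S(m) – the smoothness stabilizer, C(m) – the confidence stabilizer, and the weights α_s, α_c are controlled by the Smoothness and Confidence parameters.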
During the inversion, each intermediate stage is saved as an iteration in the inversion tree. After
completion of the inversion, you can view the model and the rays for each iteration separately.
If the functional is not reduced in the process of inversion (i.e. there is no convergence) or if the
calculation results are unsatisfactory, you can stop the iterative process. At this time, you can
modify the inversion parameters or even the current model and then resume the process with the
new parameters. You can also restart the iterative process from the beginning. In the latter case, the
outcome of any of the already completed iterations can be used as the initial model (with changes if
necessary).
Module parameters
Specify the module “Scheme” in the module parameters dialog box. This is the RadExPro project
database object where all module settings will be stored. You can create a new scheme or select an
existing one.
Menu. Description of the main module commands
Main Menu. Loading and saving data
Data tab
Create a new tomography project. Before a new project is created, you will be prompted to
save the existing project.
Import. Pressing this button opens a dialog box that allows loading data in the following
formats into the Travel Time Tomography module:
In the process of data loading, a dialog box titled “Treat ELEV data as topography?” will appear.
Here you can set a Datum value to shift the topography: the current depth values will be shifted by
the specified amount.
Open. This command allows loading travel time curves saved as picks in the RadExPro
database.
Print.
When data loading is completed, the list of loaded travel time curves will appear on the right side
of the objects pane under Observed Data.
The Objects pane contains information about the objects included in the project. The following data
types are supported: Observed data, Theoretical data, Inversions, Boreholes, and Layers.
The Observed data field allows performing the following actions with the objects: deleting, editing,
selecting and unselecting a group of objects. If you need to delete several curves, you can select them
using the Shift or Ctrl key (similar to selection in Windows Explorer). To select all, press Ctrl+A and
check the box next to any object – this will select all objects. To unselect all, press Ctrl+A and uncheck
the box next to any object.
View tab
The View tab contains the image viewing commands. The following functions are available:
Zoom in, Zoom out – allows zooming in or out of the image. These functions work
with the mouse scroll wheel – zooming is performed at the current cursor position.
Edit tab
These are the undo and redo buttons for actions performed with the travel time curves:
editing, deleting, moving etc.
Visibility tab
Selecting the Edit button enables editing of the loaded travel time curves.
Delete – delete the travel time curves selected in the object tree.
Delete Points – delete the selected points in the Data Dispersion window (see the corresponding
Dispersion command on the next tab – Moveout curves).
You can select what exactly will be displayed in the visualization area on the Moveout curves tab.
In addition, you can open a separate Data Dispersion window from this tab to identify and discard
spikes in observations.
Observed curves – show observed travel time curves;
Dispersion – open a special Data Dispersion window which shows the points of all
observed travel time curves in the project in the “distance-time” coordinate system. The window
allows selecting points on the travel time curves. If the input data contain spikes or points that are
not relevant to the project, they can be deleted. To do this, select them in the Data Dispersion
window: pressing the left mouse button selects the area under the cursor, pressing the right mouse
button unselects it. You can change the diameter of the selection cursor using the mouse wheel.
After you are done selecting the points, press OK to close the window and then press the Delete
points button.
When the Edit button is enabled, you can edit the travel time curves – apply smoothing, change the
position of the points on the curves, and delete the points.
The Edit TT menu item must be active for the travel time curve to be editable. In this mode you
can click and drag points with the right mouse button. Select Delete point in the menu and click the
left mouse button to delete a point. You can also shift the entire curve in time by pressing
Shift+right mouse button.
Model menu
This tab contains commands used to create the initial velocity model. There are two types of
velocity models – constant-velocity and gradient-based.
Creation tab
The next section allows selecting the velocity model type – with a constant velocity or with a gradient
distribution.
Velocity – velocity at the top of the model. If a constant-velocity model is selected, this velocity will
apply to the entire model.
Bottom velocity – velocity at the deepest point of the model (this value is ignored for constant-
velocity models).
Create model – configure the geometrical parameters of the model and create the model. The
geometrical parameters of the model are defined in the Create model drop-down box:
Herglotz-Wiechert – calculate an “a priori” model from the travel time curves using the
Herglotz-Wiechert-Chibisov method.
Build model – build an initial grid model taking into account the known data: boreholes and layers.
Edit model – allows editing the model. When the Edit model button is enabled, the model becomes
editable.
To edit the model, you need to select the cells for which the velocity values will be changed in the
visualization area.
IMPORTANT! If you try to create a new model with the Edit model command active, the current
model will be updated instead.
Rectangle – selects a rectangular area;
Free – allows free selection;
Columns – selects an entire column;
Rows – selects a row.
The module allows solving the forward problem for the specified model.
The Array geometry dialog box (open by pressing Edit Array) is used to define
the array geometry (unless observed travel time curves are loaded) and the
topography.
To configure the geometry, specify the position of the first source in the From field, the interval
between the sources in the Step field, and the number of sources in the Count field. After entering
the source positions, you need to configure the array position. The receiver array must be set up
individually for each source. To do this, check the box next to the Source, select the Receivers
table, enter the necessary parameters and press Add.
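For instance, assuming hypothetical values From = 0, Step = 5 and Count = 24 (meters), the generated source positions would be:

    sources = [0 + i * 5 for i in range(24)]   # 0, 5, 10, ..., 115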
Press Load to load geometry from a file, or Save to save the geometry.
View tab
This tab contains commands used to configure the model display parameters.
Show confidences – show cell confidence values instead of velocities in the model display
area.
Boreholes tab
Objects panel – Boreholes
List of available boreholes. The Edit and Delete commands are available for the objects.
Range – diameter of the area within which the borehole data will be applied.
Layers tab
In addition to boreholes, you can add layers to the model. To do this, press New.
A dialog box will open, prompting you to configure the initial layer parameters:
(X, Y) – boundary points,
• Double-click the left mouse button on a layer point to delete that point.
• Double-click on an empty space to add a point.
• Click and drag a point with the right mouse button to move it.
To change the layer parameters and the points included in the layer, select it on the objects panel
under Layers and use the Edit command (main menu or right-click popup menu).
After the layers and boreholes are defined, the model must be updated by pressing the Build model
button in the Creation menu.
The list of available layers is displayed on the objects panel under Layers. To edit a layer, select it
from the list and click Edit in the right-click dropdown menu.
Inversion menu
After defining the model and the known boreholes, you need to configure the inversion parameters
to ensure that it will be performed correctly.
Inversion tab
This tab contains the main commands for running and stopping the inversion and viewing its status.
Show Status – this command displays a graph representing the functional values and the RMS
difference between the observed and theoretical travel time curves.
RMS – graph representing the RMS difference between the observed and theoretical travel time
curves in ms.
Stabilizer – graph representing the combined behavior of the smoothness functional and of the
stabilizing functional that keeps the solution close to the “a priori” model.
Functional – graph representing the behavior of the total functional. The total functional is the sum
of the stabilizing functional and the RMS functional.
Inversions tree – list of current models, inversions and iterations. To run a new inversion on a
model, select the New Inversion from model command in the right-click dropdown menu. You
can rename, delete and create new inversions from the model. The list of completed iterations is
displayed for each inversion in this window.
Convergence tab
This tab is used to configure parameters related to convergence of the iterative process.
RMS Stop criteria (ms) – parameter used to stop the iterative process if the difference between the
theoretical and observed travel time curves is less than the specified time in ms.
RMS difference b/w iter – if the difference between two consecutive iterations is less than this
value, the solution is considered to be found.
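A minimal sketch of how these two criteria could be combined (the names are hypothetical, not the module's internals):

    def should_stop(rms, prev_rms, rms_stop_ms, rms_diff_ms):
        # Stop when the misfit is already small enough, or when two
        # consecutive iterations no longer differ significantly.
        return rms < rms_stop_ms or abs(prev_rms - rms) < rms_diff_ms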
Limitations tab
This tab allows setting the velocity limitations: minimum velocity, maximum velocity and Max
velocity change (%). The latter defines the maximum percentage by which the cell velocity may
change from iteration to iteration.
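A sketch of how such limitations could act on a single cell update per iteration (illustrative only):

    def limit_velocity(v_old, v_new, v_min, v_max, max_change_pct):
        # Clip the per-iteration change to +/- max_change_pct percent,
        # then enforce the absolute velocity bounds.
        lo = v_old * (1.0 - max_change_pct / 100.0)
        hi = v_old * (1.0 + max_change_pct / 100.0)
        v = min(max(v_new, lo), hi)
        return min(max(v, v_min), v_max)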
Confidence – contribution of the functional representing the user’s degree of confidence.
Smoothness ratio – ax/az – horizontal to vertical smoothness ratio. The higher this ratio, the
smoother the model is in the horizontal direction (horizontally layered medium), and vice versa.
Number of nodes into which each cell edge will be divided during
seismic ray tracing.
Surface Wave Analysis
MASW (Multichannel Analysis of Surface Waves)
This module is used to create models of S-wave velocities in the near-surface section using surface
wave analysis.
This manual is structured as follows: 1) summary of the underlying theory with description of the
survey method, 2) general procedure of working with the module – this section describes the
procedure of obtaining an S-wave velocity section, from data loading to generation of the final
model, 3) detailed description of module functions.
The propagation depth of surface wave vibrations is proportional to the wavelength (or inversely
proportional to the frequency). The figure below gives a graphic representation of this statement:
the high-frequency wave quickly dies out and characterizes the first layer, while the low-frequency
wave propagates deeper and provides a characteristic of the lower layers (Rix G.J. (1988)).
The above property of dispersion is most commonly utilized to build S-velocity profiles using
multi-channel surface wave analysis.
It is recommended to use low-frequency (4.5 Hz) vertical receivers. Use of low-frequency receivers
allows registering waves with larger wavelengths, which increases the depth of investigation of the
method. Higher-frequency receivers may also be used. The length of the receiver line (D) is related
to the maximum wavelength (λmax), which, in turn, determines the maximum survey depth: D ≈ λmax;
the survey depth in this case is defined as half the wavelength: Zmax ≈ λmax/2.
On the other hand, the distance between the receivers (dx) is related to the minimum wavelength
(λmin) and, therefore, the minimum survey depth (Zmin): dx ≈ λmin, Zmin ≈ λmin/2.
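For example, with purely illustrative values λmax = 60 m and λmin = 4 m, these rules of thumb give D ≈ 60 m, Zmax ≈ 30 m, dx ≈ 4 m and Zmin ≈ 2 m.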
However, in practice the primary factor determining the maximum wavelength is the source.
Usually it is within the first few dozen meters.
The distance between the source and the first receiver is usually 1–4 dx (the method data were taken
from www.masw.com).
2) Dispersion analysis – building dispersion images. A dispersion image is calculated for each
seismogram. A typical dispersion image is presented in the figure below (generated using the
MASW module). The calculation procedure is described in the article by Choon B. P. et al. (1998).
The dispersion curve is extracted from the image by picking the amplitude maxima:
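The phase-shift calculation described by Choon B. P. et al. (1998) can be sketched in a few lines of numpy (the array layout and all names are assumptions, not the module's internals):

    import numpy as np

    def dispersion_image(traces, offsets, dt, velocities, freqs):
        # traces: (ntraces, nsamples) shot gather; offsets: (ntraces,).
        spectra = np.fft.rfft(traces, axis=1)
        f_axis = np.fft.rfftfreq(traces.shape[1], dt)
        image = np.zeros((len(velocities), len(freqs)))
        for j, f in enumerate(freqs):
            u = spectra[:, np.argmin(np.abs(f_axis - f))]
            u = u / (np.abs(u) + 1e-12)          # keep phase only
            for i, v in enumerate(velocities):
                # phase-align the traces along the trial velocity and stack
                image[i, j] = np.abs(np.sum(u * np.exp(2j * np.pi * f * offsets / v)))
        return image                             # maxima trace the dispersion curve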
3) Final step – inversion – finding the S-velocity profile whose theoretical dispersion curve is as
close to the experimental curve as possible. Occam’s inversion is implemented in the MASW module
– the root-mean-square error between the curves is minimized while maintaining the maximum model
smoothness (Constable S.C. et al. (1987)). The S-velocity profile is tied to the midpoint of the receiver
spread. A two-dimensional S-wave velocity profile is built by interpolation between the obtained
vertical profiles.
Below is a detailed description of the above processing steps for the MASW module in the RadExPro
package.
When the module is added to the flow, the scheme selection dialog box opens. Create a new scheme
or select an existing one by pressing the Browse… button:
After selecting the scheme, launch the module by pressing the Run button. The main project
management window – MASW Manager – will appear. The left part of this window shows a list of
all dispersion curves that have been added to the scheme. The curves (and, therefore, Vs profiles) are
tied to the midpoint of the receiver spread and sorted in ascending order. Source numbers for these
midpoints are shown in the manager as well. This window remains empty until at least one image is
processed and added:
Dispersion Imaging
This menu opens the dispersion image calculation and processing window:
A dataset with assigned geometry is input into this function. Header fields sou_x, rec_x, offset, ffid,
and chan must be filled in for the module to operate properly. Source-receiver geometry is assigned
using standard RadExPro tools – Geometry Spreadsheet or Trace Header Math module. The dataset
can contain a single seismogram or several seismograms (for example, the entire survey). It is
recommended to combine all source points into one dataset to make working with the module easier.
To select the dataset and start the dispersion image calculation, press the Calculate from data button,
select the dataset and press OK.
The calculation process will be launched for all source points in the dataset.
By default, the image is calculated in the phase velocity range of 0 to 500 m/s with 1 m/s steps and in
the frequency range of 0 to 70 Hz. If the image exceeds the frequency or velocity limits, change the
calculation parameters in the Calculation options dialog box and check the Recalculate dispersion
images option – the dispersion images will be recalculated automatically.
The status bar at the bottom of the window shows the current frequency, phase velocity, and
amplitude values.
After the dispersion image appears on the screen, the dispersion curve needs to be “extracted” by
means of picking the image by amplitude maximums. Automatic picking mode is activated by default
– points are automatically distributed between the first and the last one by amplitude maximums in
the specified window with the specified steps. The Picking parameters option is used to set the
parameters.
After the image is picked, select the next source point by pressing the arrow in the control panel, and
repeat the procedure.
When switching from one dispersion image to another, the current picking is saved even if it was not
added to the list. This way, the user can always return to any source point, view the picking created
earlier, and edit it if necessary.
The calculated dispersion images together with the pickings are parts of the scheme, i.e. if the user
exits and then re-launches the project, they will be restored as of the time of exiting.
After all available images are picked, add the curves to the MASW Manager list by pressing the
Add all curves button. Window with added curves is shown below:
When at least one curve is added to the list, the Inversion button becomes active. Now we can
move on to the next step – curve fitting and model generation.
Inversion
The initial medium model needs to be specified before the curve fitting process is started. When the
Inversion button is pressed for the first time, the dialog box of the initial model filling type selection
appears:
It is recommended to select automatic model filling. This will open the parameter setup dialog box:
Number of layers – number of layers in the model;
Half Space Depth – depth to the half space. Approximate estimate of the half space depth should
be made based on the maximum wavelength divided by two (Julian I. (2008)).
Automatic filling is performed as follows. The half space depth is broken down exponentially into
the specified number of layers depending on the layer filling type. The Vs velocity is assumed equal
to Vr/0.88 at each frequency and is averaged between the values occurring within one layer (Jianghai
X. (1999)). The Vp velocity is calculated from Vs and Poisson’s ratio ν using the standard relation
Vp = Vs·√((2 − 2ν)/(1 − 2ν)).
In this case, the same value of Poisson’s ratio and density is set for the entire model.
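A sketch of this automatic filling under stated assumptions (the exact exponential law and the λ/2 depth mapping are one possible reading of the text; all names are hypothetical):

    import numpy as np

    def autofill_layers(freqs, vr, n_layers, half_space_depth, nu=0.35):
        # freqs, vr: the picked dispersion curve (Hz, m/s).
        depth = (vr / freqs) / 2.0               # each point maps to ~ lambda/2
        vs_curve = vr / 0.88                     # Vs assumed equal to Vr/0.88
        # Exponential split of the half-space depth into n_layers layers.
        edges = half_space_depth * (np.exp(np.linspace(0, 1, n_layers + 1)) - 1) / (np.e - 1)
        vs = []
        for top, bottom in zip(edges[:-1], edges[1:]):
            mask = (depth >= top) & (depth < bottom)
            # Average Vs of the points falling into the layer; fall back
            # to the nearest point if the layer is empty.
            vs.append(vs_curve[mask].mean() if mask.any()
                      else vs_curve[np.argmin(np.abs(depth - top))])
        # Vp from Vs and Poisson's ratio (standard relation).
        vp = [v * np.sqrt((2.0 - 2.0 * nu) / (1.0 - 2.0 * nu)) for v in vs]
        return edges, vs, vp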
The following inversion parameters are also specified in the current tab:
• Use confidence interval – this option determines the size of the confidence interval at each
frequency (curve picking confidence level). By default, it is disabled, and the value is equal
to one;
If this function is enabled, the value will be taken from the one calculated by the dispersion
curve (in percent of the picking point amplitude value) (see Picking parameters in the
Dispersion Imaging section).
• Chi-factor function – specifies the inversion completion threshold value. When the root-mean-square
error becomes less than the chi-factor value during curve fitting, the inversion process is considered
to have been completed successfully. By default, this value is equal to one.
When curve fitting is performed simultaneously with Vs fitting, either the Vp value or
Poisson’s ratio changes. The next option allows specifying which value will be fixed:
Absolute profile coordinates in meters are shown on the horizontal axis, and depth in meters is shown
on the vertical axis.
The receiver spread midpoints to which S-velocity models with the depths are tied are shown as
triangles, with coordinates shown above them.
Yellow-filled triangles mean that a certain model is assigned to that point (if automatic initial filling
of the layers is used, the model is assigned to all receiver points). White-filled triangles mean that no
model is assigned to that receiver point (if the user decides not to use automatic filling – this will be
discussed below). This approach to tie-in point designation gives a quick indication of whether any
work has been performed with a particular receiver point. The active receiver point whose
experimental curve is shown on the Edit model tab is marked with a dot in the center of the triangle.
After the initial model is specified, the curve fitting can be launched. The Tools→ Run inversion
function (also accessible by pressing the button on the tool bar) launches curve inversion for all
receiver points. A dialog box indicating that inversion is in progress will appear. As a result of
inversion, an updated model corresponding to the fitted curves will be shown on the screen:
The fitting results for each curve can be evaluated using the Edit model option activated by
double-clicking the receiver point triangle (or Tools→Edit Model). This option is also a useful tool
for detailed analysis of the influence of all parameters on the dispersion curve.
Below is the model editing window for source point No. 12. It consists of two main parts – a display
of the experimental and theoretical curves and the Vs and Vp models, and a table with model
parameters.
The current receiver midpoint number is shown in the Receiver midpoint field. The same number is
also highlighted as active in the MASW Manager window. Root-mean-square error between the
experimental and theoretical curves and the number of curve fitting iterations are shown next to the
source point number.
The experimental curve is shown in blue, the theoretical one – in red. If necessary, changes of the
current Vs and Vp values with depth can be shown in this window (Parameters tab):
The appearance of the theoretical curve can be changed by changing the model parameters in the
table. A detailed description of the available options is provided below.
All added curves are shown in the left part of the window. To select the curves that will be used for
final model calculation, tick the check boxes next to them. If a model has already been built, the
influence of each receiver midpoint on the result can be evaluated by enabling/disabling the source
point in this list.
Dispersion Imaging – opens the dispersion image calculation and dispersion curve extraction
window;
Inversion – generation of the medium model by finding theoretical curves which are as close to the
experimental ones as possible. This button remains inactive until at least one curve is added to the
list;
Pressing the Save button saves the current scheme. The Exit button is used to exit the module; if the
scheme has not been saved, the following message will be displayed
If the user chooses not to save the scheme, all unsaved data will be lost.
Dispersion Imaging
This menu opens the dispersion image calculation and processing window:
Calculate from data – selects the dataset for dispersion image calculation.
Calculation options – opens the dispersion image parameter calculation dialog box:
Start V – start velocity for the calculation;
Recalculate dispersion images – the dispersion images will be recalculated automatically after the
OK button is pressed;
Picking parameters – this option allows changing the following picking parameters
Mode – automatic or manual picking mode. Automatic picking is enabled by default.
Confidence interval – confidence interval specified in percent of the current amplitude value at the
picking point. Used as curve definition confidence parameter during inversion (optional).
Draw confidence interval – enables display of the confidence interval in the dispersion image;
Search maximum window – window in ms within which the search for the maximum is performed
during automatic picking;
Smooth – smooth the curve using the base specified in the picking parameters;
The Display parameters option allows editing the dispersion image palette.
Add current curve – adds the current curve to the curve list in the MASW Manager window;
Add all curves – adds all existing curves to the list.
If a curve with the same name already exists in the list, the system will display a replacement message:
Inversion
This button becomes active after at least one curve is added to the list.
When the Inversion button is pressed, a dialog box opens in the project window prompting the user
to select the initial model filling method – automatic or manual:
If automatic model filling is selected, the model parameter setup dialog box will appear. A detailed
description is provided in Section 2.2.
If the user decides not to use automatic model filling, an empty model window will appear; all
receiver point triangles will be white, meaning that there is no initial model assigned to them:
Double-clicking the receiver point triangle opens the curve fitting window (also accessible by
selecting Parameters→Edit model). This option allows selecting model parameters for each
individual receiver point, and is also a useful tool for detailed analysis of the influence of the parameters
on the dispersion curve. The initial window of the curve fitting module was shown
earlier.
If there is no initial model, it can be filled either manually or automatically by pressing the Autofill
button. Automatic filling parameters are specified in the main model window: Parameters→Model
and inversion parameters (a detailed description was provided above) or button on the tool bar.
Theoretical curve – shows the theoretical curve corresponding to the model specified in the table.
The theoretical curve corresponding to the model specified in the table is shown in red in the curve
display window; the experimental curve is shown in blue. If necessary, changes of the current Vs and
Vp values with depth can be shown in this window (Parameters tab). When the mouse cursor is
hovered over the curve window, the current velocity and frequency values are displayed in the upper
right corner.
Run inversion – launches the curve fitting process using Occam’s method (the root-mean-square error
is minimized while maintaining the maximum model smoothness). Curve fitting is performed by
changing the Vs velocity (the table column with the parameter being selected is highlighted in light
blue). During this process, either Vp or Poisson’s ratio is also updated depending on the inversion
parameters.
During each iteration, the theoretical curve and the velocity change with depth (if enabled) are
updated in accordance with the current model. The inversion process stops when the number of
iterations specified in the parameters is reached or when the root-mean-square error reaches a value
below the one specified in the inversion parameters.
After the curve fitting process is completed, the user can perform a more detailed analysis of the
theoretical curve by changing other parameters in the table. To display the updated theoretical curve
after changing any of the parameters, press the Theoretical curve button.
The table contains four parameters that allow changing the shape of the theoretical curve – Vs, Vp,
ρ, ν.
The Lock on edit switch performs the following function: the value of the selected parameter is
locked, and the table column becomes unavailable for editing. If one of the unlocked parameters is
changed, the other one will be updated in accordance with the current value of the locked
parameter.
Show grid – show lines perpendicular to the axes with the specified step
The Cancel button cancels all changes and closes the model editing application.
Switching between source points is performed either by pressing the arrows or by selecting the
receiver point in the model window or the MASW Manager window. If the model has been changed
but has not been applied, a message prompting to apply the model is displayed:
File
Export (button on the toolbar) – exports the model to a grd file;
Parameters
Edit model – opens the curve fitting window. A detailed description was provided above;
Model and Inversion parameters (button on the toolbar) – opens the model and inversion
parameter window (a detailed description of the parameters was provided above);
Display mode – switching between layer-by-layer and grid type of model display.
Tools
Run inversion (button on the toolbar) – launches the inversion process for all active receiver
points;
Autofill all (button on the toolbar) – automatically fills the model values for all receiver points
using the parameters specified on the Model and Inversion parameters tab. Previously filled
values for all receiver points are automatically overwritten!
Vertical velocity profiles are tied to the receiver spread midpoint. Values between the profiles are
interpolated.
If a new receiver midpoint is added to an already built model, interpolated values at the point where
it was added are used as the initial values for the model at that point. The receiver point remains
white, since no model editing took place at that point.
When a receiver point is right-clicked, the Spread this model for all shotpoints option is displayed.
This option applies the current model to all receiver points.
Influence of each receiver point on the model can be evaluated by enabling/disabling receiver points
in the MASW Manager window. When a receiver point is unchecked, it will be removed from the
model and will not affect the end result.
Bibliography:
1) Choon B. P., Richard D. M., and Jianghai X. (1998). “Imaging dispersion curves of surface waves
on multi-channel record”. SEG Expanded Abstracts.
2) Constable S.C., Parker R.L., and Constable C.G. (1987). “Occam’s Inversion: A Practical
Algorithm For Generating Smooth Models From Electromagnetic Sounding Data”. Geophysics,
52, 289-300.
4) Julian I., Richard D. M. (Kansas Geological Survey), and George T. (CReSIS, The University of
Kansas) (2008). “Some practical aspects of MASW analysis and processing”. SAGEEP Extended
Abstracts 2008.
5) Jianghai X., Richard D.M., and Choon B.P. (1999). “Estimation of near-surface shear-wave
velocity by inversion of Rayleigh waves”. Geophysics, 64, 691-700.
Vibroseis
Sweep Generation
The module is designed to generate a sweep signal and save it as a single trace.
Sweep Generation module window. Panel 1 controls the sweep signal length and its sampling rate,
Panel 2 controls the signal frequency content, and Panel 3 controls the waveform and its tapering at
the edges.
Parameters
Duration panel:
Tapering panel:
• Duration – number of samples (as % of the sweep length) subjected to smoothing from each
edge of the sweep signal.
• Type – parameter defining the sweep waveform.
• Fade rate – sweep signal attenuation rate in the case of the exponential form.
ATTENTION! If there are traces in the flow before the Sweep Generation module, they are deleted.
Therefore, this module should be placed at the head of the flow.
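For orientation, a minimal sketch of a linear sweep with cosine tapering of the edges (the module's actual waveform types and the Fade rate option may differ):

    import numpy as np

    def linear_sweep(f0, f1, length_s, dt, taper_pct=5.0):
        # s(t) = sin(2*pi*(f0*t + (f1 - f0)*t^2 / (2*T))), cosine-tapered
        # over taper_pct percent of the samples at each edge.
        t = np.arange(0.0, length_s, dt)
        s = np.sin(2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * length_s)))
        n_tap = int(len(t) * taper_pct / 100.0)
        if n_tap > 0:
            ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_tap) / n_tap))
            s[:n_tap] *= ramp
            s[-n_tap:] *= ramp[::-1]
        return t, s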
Inversion
Acoustic inversion
The "Acoustic inversion" module is designed to reconstruct the media impedance model from the
seismic data using a genetic algorithm and is based on the paper by Vardy, M.E. “Deriving shallow-
water sediment properties using post-stack acoustic impedance inversion”. The algorithm of the
module operates separately for each trace, i.e. first, the impedance model for the first trace is
generated, then for the second one, and so on. The algorithm works under the assumption of a
normally incident rays towards the surface.
Output data: a set of impedance traces, synthetic traces, PPD (probability density distribution) of
the solutions found.
The first step: the algorithm generates N random models of the impedance distribution for one trace
within the initial model defined by the user. These N models are called population.
The second step: the resulting models are convolved with the source wavelet. Recall that this is
possible thanks to the normal-incidence assumption. The resulting synthetic traces are then compared
with the original (field) trace, and the residual between them is determined.
The third step: for each random model (Step 1), the degree of its "suitability" is determined. The
smaller the residual of the model (Step 2), the more "suitable" it is.
The fourth step determines which models will be used further to find the best solution, and which
ones are removed (by analogy with natural selection in nature). An interesting point is that you can
not pass on only the most durable traces, since this will quickly lead to minimization of the residual
functional, but it is more likely that this minimum will be local. At the same time, the most "durable"
models have more chances to qualify. There are several methods to select the traces for further
calculation. The user chooses one. The description of these methods can be found in the Module
Parameters section.
The fifth step: the selected impedance models are crossed. This process is called crossing-over.
For each model obtained after step 4, a pair is randomly chosen from the same set of selected models,
so that N parental pairs are created. As a result of a mathematical transformation, the parent models
are crossed: each parent pair generates two models that carry the characteristics of both parents.
The sixth step: random changes called "mutations" are introduced into each of the obtained models.
This also helps the algorithm avoid converging to a local minimum. The first iteration of the
algorithm is now completed. The set of impedance traces obtained at step 6 is fed to the input of
step 2, from which the second iteration begins.
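The whole loop can be condensed into a sketch like the following (roulette selection and single-point crossover are chosen for illustration; the module offers more options, and every name below is hypothetical):

    import numpy as np

    def ga_step(population, misfit, mutation_prob, rng):
        # population: (N, nsamples) float array of impedance models;
        # misfit(model) returns the residual against the field trace.
        errs = np.array([misfit(m) for m in population])
        prob = 1.0 / (errs + 1e-12)
        prob /= prob.sum()                       # roulette-wheel selection
        n, nsamp = population.shape
        children = []
        while len(children) < n:
            pa, pb = population[rng.choice(n, 2, p=prob)]
            cut = rng.integers(1, nsamp)         # single-point crossover
            children.append(np.concatenate([pa[:cut], pb[cut:]]))
            children.append(np.concatenate([pb[:cut], pa[cut:]]))
        children = np.array(children[:n])
        # mutation: small random perturbations of random samples
        mask = rng.random(children.shape) < mutation_prob
        children[mask] *= rng.uniform(0.9, 1.1, mask.sum())
        return children

    # usage: population = ga_step(population, misfit, 0.05, np.random.default_rng(0))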
"Acoustic inversion" module operation window divided into six panels. Wavelet - source wavelet
parameters, Layers - limitations on the model, Genetic algorithm options - parameters and constrains
of the genetic algorithm that affect how the impedance model of a trace will be selected, Output -
parameters for output and visualization of the results of the module's operation, Progress file - creating
txt reports, reflecting the optimization process for each iteration, Number of threads - the number of
threads for parallelizing the algorithm.
Parameters
1) Wavelet panel
Path – it is necessary to specify the path to the trace containing the source wavelet. If the
data set specified in the database contains more than one trace, then its first trace will be
used as a wavelet.
Zero time — a parameter that determines the zero time of the source impulse.
The "At center" value – zero time is in the center of the wavelet trace.
The "Custom" value – zero time is indicated by the user in ms.
2) Layers panel
Thickness – the minimum layer thickness in samples. Inside a layer, impedance is constant.
Impedances – impedance ranges for each layer. Values referring to different layers are separated
by colons.
• Start – the minimum possible impedance for a layer.
• End – the maximum possible impedance for a layer.
• Step – the step of impedances search. When generating and changing the model, the
algorithm will use the discrete impedance values specified with this step.
• Borders (ms) – setting the boundaries of horizon layers in milliseconds. You can
specify a constant, a pick, or a header value. To do this, double-click inside the field:
▪ Reflectivity series energy – component which characterizes the energy of the
reflection coefficients. It is the sum of squares of the reflection coefficients
calculated from the impedance model, divided by the number of coefficients. The
weight of this component can be any nonnegative number K2.
All coefficients that are not considered "bright" do not participate in the calculations.
The contribution of this component can be any nonnegative number K3.
▪ Impedance model trend – component which characterizes the residual between the
model and the a priori trend of the acoustic impedance change set by the user. The
trend differs from the initial model (the Layers panel) in that it does not confine the
solution within fixed limits, but only pulls the solution towards it with a force that
depends on the coefficient. The contribution of this component can be any
nonnegative number K4.
The trend is loaded into the module from a file recorded in the velocity function format,
but with impedance values instead of velocities.
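Putting the listed components together, the minimized objective can be sketched as follows (the data-misfit term and its weight K1 are an assumption based on the surrounding text):

    \Phi = K_1\, E_{misfit} + K_2\, E_{reflectivity} + K_3\, E_{bright} + K_4\, E_{trend}

where each K is the nonnegative user-defined weight of the corresponding component.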
ATTENTION! The loaded trend and seismic data are compared using the CDP header.
Therefore, you need to make sure that the traces in the flow and the loaded trend have the
correct CDP values.
• Population size — the number of models in the population. This is the number of random
models that is generated in the first step of the algorithm for each trace.
• Selection method – a parameter that is responsible for the method of forming a set of pairs
for crossing.
▪ Roulette – a method that involves random selection, which is based on the suitability
of the specimen. The greater the suitability, the more likely to be selected.
• Cross-over probability – the probability of crossing. If this parameter equals 0, the unchanged
parent pair passes to the next generation. If this parameter equals 1, then the probability that
the parent models will go to the next step without crossing is 0.
• Cross-over type — the parameter responsible for the type of crossing:
▪ single point: at gene level – single-point crossing of models. In the binary code of
the impedance trace, a bit is randomly selected, after which the parent models
exchange "tails". In this case, the numerical value of one sample in the crossing
area may change (since its binary value changes).
▪ multipoint: inside each parameter – crossing occurs separately for each sample of
models according to a certain algorithm. To do this, each sample of the parent
models is represented by a binary code (this fits in 7-8 bits within the given range of
values), after which the place of the binary code break is randomly determined. After
this, two parent samples exchange halves of the binary code, forming a new pair of
samples.
▪ multipoint: random parameter swaps – it is randomly determined for each sample
whether a crossing will be made for it. In case it will, the parent models exchange
values at this sample, otherwise the values remain unchanged.
• Mutation probability – the probability of model mutation. If the parameter equals 0, no
mutation occurs, and the unchanged model passes to the next generation. If this parameter
equals 1, mutation occurs with a probability of 100%.
• Maximum number of iterations – the maximum number of iterations of the genetic
algorithm.
• Objective function epsilon – tolerance for the functional minimization. If during one of the
iterations of the algorithm the value of the functional becomes less than or equal to the
specified tolerance, the model is considered the result of the inversion algorithm.
• Number of parallel generations – the number of sets of models (or generations), each of
which develops simultaneously and independently.
4) Output panel
• Field for output type – the header in which the trace marking at the module output is
recorded:
a. "0" means that the trace was generated according to some impedance model
b. "1" means that the trace contains an impedance model
c. "2" means that the trace contains a column from PPD (a posteriori probability
distribution)
• Number of best models in each parallel generation – the number of the best models from
each parallel population that will be shown to the user. Traces of impedances are sorted in
ascending order of the residual.
Example:
Suppose that for one of the traces of seismic section, three populations of models are
developing in parallel. After the algorithm operation is over, the most "suitable" models from
each population will be selected to be shown to the user. Their total number will be equal to
the value of the Number of best models in each parallel generation parameter multiplied
by 3.
• Number of best traces in each parallel generation – the number of the best synthetic traces
from each parallel population that will be shown to the user. The synthetic traces are sorted
in ascending order of the residual.
• Field for relative residual – the header in which the relative residual of the input and output
trace is recorded.
• Field for index of parallel generation – the header contains the number of one of the parallel
populations to which the particular trace or the impedance model of the trace at the output
belongs.
• Field for impedance axis of PPD – the header in which the impedance value, corresponding
to the most probable value for the PPD is recorded.
• Path – the full path to the files. It is possible to specify a path using a mask and specifiers.
• %i – specifier, which allows you to record separate report files for each trace.
• %p – specifier, which allows you to record separate report files for each population.
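For example, a hypothetical mask C:\reports\trace%i_pop%p.txt would produce a separate report file for every trace/population combination (trace15_pop2.txt for trace 15 of population 2).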
Special tools
Tape Loader (program for reading data from tapes)
The program is designed for copying files from a tape to the computer disk. The program reads files
of any format sequentially, as they are, and saves them to the indicated directory on the disk under
the given names. The program is implemented as an independent application. To run it, go to the
directory where the RadExPro Plus package is installed, find the file TapeLoader.exe and run it.
System requirements
The Tape Loader program works with tape reading equipment connected through a SCSI interface,
under Windows 2000 and more recent operating systems. An ASPI driver is not obligatory, but if it
is absent the program cannot determine the types of the tape devices connected to the computer and
will simply designate them as TAPE0, TAPE1, etc.
When you run the program, the log is initialized and, if possible, the ASPI level is loaded; general
information on the initialization process is written to the log. From then on, information on the
running process is written to the log, along with failures and summaries of executed commands.
The main program commands are situated in the Tools menu. Most of them are duplicated by
buttons on the toolbar.
• Select tape… - opens the dialog for changing the current device (the name of the current
device is shown in the main window header). If the ASPI level is not loaded, the program
cannot determine the number of connected devices and their names, and uses the notation
conventions TAPE0, TAPE1, etc.
• Load files from tape… - starts the data loading. As soon as you select the command, a dialog
with parameters for saving the loaded files opens.
• Path to save loaded files – path where the files loaded from the tape will be saved.
• The "…" button opens the path selection dialog; the path can also be entered manually.
• File name base – base for the names of the files on disk, to which a number is added (for
example, if the file name base is "file", the loaded files will be named "file0000", "file0001",
etc.).
• Select files – line that sets the numbers of the files to be loaded from the tape. The following
notation is accepted:
* – read all files from the tape;
0, 2, 3 – read the files with the indicated numbers (file numbering starts from zero, i.e. the first file
on the tape is number 0, the second is 1, etc.);
0-1, 3-5 – read the files with numbers that fall within the indicated ranges;
1, 3-4, 5, 7-8 – a combination of separate numbers and ranges is allowed.
After specifying the parameters, click the Load button to start loading the files from the tape and
writing them to disk.
Break operation – allows interrupting a time-consuming operation. There are two break modes: an
immediate break at the time of reading, or a break after the current file has been read. When you
choose this command, a dialog window appears that allows the user to indicate the desired break
mode.
The File menu contains a single command, Exit, which closes the program.
Displaying wells
The wells are displayed in the Screen Display module as their projections onto the seismic profile.
Before a well can be displayed, it must be created in the project database through the Database
Navigator tab of the program main window.
To select wells that should be displayed and to adjust wells visualization in the Screen Display
module, select the Tools/Wells... menu command:
When this command is chosen the following dialog box for well visualization parameters settings
appears:
In the left part there is a list of displayed curves. To the right of the list there are buttons meant for
managing the list.
• Add well...- adds a well from the database into the list of displayed curves.
• Remove well - removes the selected well from the list of displayed curves.
• Save well list... - allows saving the whole list of displayed curves into database in order to
load them later.
• Load well list - allows loading a previously saved list of wells from the database and makes it
current. When this is done, all wells in the list are checked.
• Add well list - this command is similar to the previous one; however, the list loaded from the
database does not replace the current one but is added to it.
In the right part of the dialog box there is a set of parameters controlling the well display.
• Treat vertical axis as Depth - if this option is activated then the vertical well axis is treated
as depth axis. Otherwise, the vertical well axis is treated as time axis.
• Show well curves - show/don't show wells.
• Show formation tops - show/don't show formation tops.
• Show well names - show/don't show well name at the top.
• Stack type - the stack type on which the wells are projected.
• Maximal distance to visualize well - sets the maximum distance from a well to the profile at
which the well is still projected onto the profile and displayed.
• Use time-depth curve if available - gives the time-depth curve obtained from the well priority
over the time-depth curve obtained from the Velocity Model Editor.
• The Log Data... button should be used to adjust the parameters for log curves displaying for
every selected well. When this button is pressed the following dialog box appears:
It is possible to display up to four log curves for every well at the same time. In the right part of the
dialog box, in the Logs to plot field, there are four drop-down lists of the log curves available for this
well.
Here you can select the curves which should be displayed on the screen. For every curve you can
specify the color by clicking the colored square to the right.
At the left side you can specify uniform display parameters for all selected log curves for this well:
• Plot area position - the displacement of the log curve zero from the borehole, expressed as a
percentage of the maximum amplitude.
• Plot area width - the width of the area covered by the maximum curve amplitude, expressed
in centimeters.
File formats
Formation top file description
Every line in the file is of the following kind:
A file description
The file must contain a line starting with ~A. This line describes the column names of the table that
follows.
All lines following this line must be table rows with blanks or tabs as separators. Insignificant lines
starting with the '#' symbol, as well as empty lines, are permissible.
The table must contain the following columns: Time and Depth (if the depth is the cable depth) or
Time and Z (if the depth is the true depth).
Development tools – Creation of your own module
Microsoft Visual C++ Custom AppWizard
RadExPro is an open-architecture software, so you can create your own module and integrate it
into the system. Detailed instructions, an API and a Wizard for Microsoft Visual C++ that helps
creating new modules can be provided upon request. If you are interested in this option, please
contact us at support@radexpro.ru