ODI 12c - Concept - Knowledge Modules

Knowledge Modules (KMs) are procedures used by Oracle Data Integrator (ODI) to perform specific tasks related to data integration, such as connecting to technologies, extracting and transforming data, and integrating data into targets. There are six main types of KMs: Reverse KMs extract metadata, Loading KMs extract data from sources, Journalizing KMs track data changes, Integration KMs load data to targets, Check KMs validate data integrity, and Service KMs generate code for data services. KMs can be template-style or component-style depending on the ODI version.

Knowledge Modules

Knowledge Modules (KMs) are procedures that use templates to generate code.
Each KM is dedicated to a specialized job in the overall data integration process.
Each Knowledge Module contains the knowledge required by ODI to perform a specific set of actions or tasks against a specific technology or set of technologies, such as connecting to that technology, extracting data from it, transforming the data, checking it, or integrating it.
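
As a rough illustration of the template idea (this is not the ODI substitution API, and all table and column names below are made up for the example), the following Python sketch shows how a generic template plus repository metadata could produce an executable statement at code-generation time:

import string

# Hypothetical metadata that a KM would normally obtain from the ODI repository.
metadata = {
    "target_table": "SALES_DW.F_ORDERS",
    "columns": "ORDER_ID, CUSTOMER_ID, AMOUNT",
    "staging_table": "ODI_STAGE.C$_ORDERS",
}

# A generic "integration" template: the placeholders are resolved at generation time,
# which is the basic idea behind a template-style KM task.
km_task = string.Template(
    "INSERT INTO $target_table ($columns)\n"
    "SELECT $columns FROM $staging_table"
)

print(km_task.substitute(metadata))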
ODI Uses Six Different Types of Knowledge Modules

1. RKM (Reverse Knowledge Module): RKMs are used to perform a customized reverse-engineering of data models for a specific technology. An RKM extracts metadata from a metadata provider into the ODI repository. RKMs are used in data models. A data model corresponds to a group of tabular data structures stored in a data server; it is based on a Logical Schema defined in the topology and contains only metadata.

An RKM follows these steps:

 Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.

 Retrieves sub models, datastores, attributes, unique keys, foreign keys, conditions
from the metadata provider to SNP_REV_SUB_MODEL, SNP_REV_TABLE,
SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN,
SNP_REV_JOIN_COL, SNP_REV_COND tables.

 Updates the model in the work repository by calling the OdiReverseSetMetaData tool.
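
The sketch below mirrors that flow in plain Python against an in-memory SQLite database: it clears a staging table, loads table metadata from the catalog, and then applies it to a "model". The staging table name echoes SNP_REV_TABLE, but this is only an illustration of the reverse-engineering pattern, not of the OdiReverseResetTable or OdiReverseSetMetaData tools themselves.

import sqlite3

conn = sqlite3.connect(":memory:")

# A pretend source schema to reverse-engineer (stands in for the metadata provider),
# plus a simplified staging table for the extracted metadata.
conn.executescript("""
    CREATE TABLE ORDERS (ORDER_ID INTEGER PRIMARY KEY, AMOUNT REAL);
    CREATE TABLE CUSTOMERS (CUSTOMER_ID INTEGER PRIMARY KEY, NAME TEXT);
    CREATE TABLE SNP_REV_TABLE (TABLE_NAME TEXT, TABLE_TYPE TEXT);
""")

# Step 1: clean up the staging table from previous executions.
conn.execute("DELETE FROM SNP_REV_TABLE")

# Step 2: retrieve the datastores from the metadata provider into the staging table.
conn.execute("""
    INSERT INTO SNP_REV_TABLE (TABLE_NAME, TABLE_TYPE)
    SELECT name, type FROM sqlite_master
    WHERE type = 'table' AND name NOT LIKE 'SNP_REV%'
""")

# Step 3: update the "model" from the staged metadata.
for name, ttype in conn.execute("SELECT TABLE_NAME, TABLE_TYPE FROM SNP_REV_TABLE"):
    print(f"Registered datastore {name} ({ttype}) in the model")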

2. LKM (Loading Knowledge Module): LKMs are used to extract data from heterogeneous source systems (files, middleware, databases, etc.) to a staging area. LKMs are used in interfaces (called mappings in ODI 12c). An interface consists of a set of rules that define the loading of a datastore or a temporary target structure from one or more source datastores.

 The LKM creates the "C$" temporary table in the staging area. This table will hold
records loaded from the source server.

 The LKM obtains a set of pre-transformed records from the source server by
executing the appropriate transformations on the source. For SQL-type LKMs, this is
done by a single SQL SELECT query when the source server is an RDBMS. When
the source does not have SQL capabilities (such as flat files or applications), the LKM
simply reads the source data with the appropriate method (read file or execute API).

 The LKM loads the records into the "C$" table of the staging area.
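
A minimal sketch of those three steps, using two SQLite connections as stand-ins for a real source server and staging area (table and column names are hypothetical):

import sqlite3

source = sqlite3.connect(":memory:")   # stands in for the source server
staging = sqlite3.connect(":memory:")  # stands in for the staging area

source.executescript("""
    CREATE TABLE ORDERS (ORDER_ID INTEGER, AMOUNT REAL, STATUS TEXT);
    INSERT INTO ORDERS VALUES (1, 120.0, 'OPEN'), (2, 75.5, 'CLOSED');
""")

# Step 1: create the "C$" temporary table in the staging area.
staging.execute('CREATE TABLE "C$_ORDERS" (ORDER_ID INTEGER, AMOUNT REAL)')

# Step 2: obtain pre-transformed records from the source with a single SELECT
# (any source-side transformations would appear in this query).
rows = source.execute(
    "SELECT ORDER_ID, ROUND(AMOUNT, 2) FROM ORDERS WHERE STATUS = 'OPEN'"
).fetchall()

# Step 3: load the records into the "C$" table of the staging area.
staging.executemany('INSERT INTO "C$_ORDERS" VALUES (?, ?)', rows)
staging.commit()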

3. JKM (Journalizing Knowledge Module): JKMs are used to create a journal of data modifications (insert, update, and delete) in the source databases in order to keep track of changes. JKMs are used in data models and support Changed Data Capture.

JKMs create the infrastructure for Change Data Capture on a model, a sub model or a
datastore. JKMs are not used in mappings, but rather within a model to define how the
CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a
table of changes, views on this table, and one or more triggers or log capture programs.
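
To make the trigger-based flavor of that infrastructure concrete, here is a small SQLite sketch: a simplified journal table plus an AFTER UPDATE trigger that records each change. A real JKM generates technology-specific journal tables, views and subscriber handling, so the names and columns here are only illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")

conn.executescript("""
    CREATE TABLE CUSTOMERS (CUSTOMER_ID INTEGER PRIMARY KEY, NAME TEXT);

    -- Simplified "table of changes": one row per captured modification.
    CREATE TABLE "J$_CUSTOMERS" (
        CUSTOMER_ID INTEGER,
        JRN_FLAG    TEXT,                      -- 'I', 'U' or 'D'
        JRN_DATE    TEXT DEFAULT CURRENT_TIMESTAMP
    );

    -- Capture program: a trigger that journalizes updates on the source table.
    CREATE TRIGGER TRG_CUSTOMERS_U AFTER UPDATE ON CUSTOMERS
    BEGIN
        INSERT INTO "J$_CUSTOMERS" (CUSTOMER_ID, JRN_FLAG)
        VALUES (NEW.CUSTOMER_ID, 'U');
    END;
""")

conn.execute("INSERT INTO CUSTOMERS VALUES (1, 'Initial name')")
conn.execute("UPDATE CUSTOMERS SET NAME = 'Updated name' WHERE CUSTOMER_ID = 1")

print(conn.execute('SELECT * FROM "J$_CUSTOMERS"').fetchall())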
4. IKM (Integration Knowledge Module): IKMs are used to integrate (load) data from the staging area into the target tables.

The IKM is in charge of writing the final, transformed data to the target tables. Every mapping uses a single IKM for each target that is to be loaded. When the IKM is started, it assumes that all loading phases for the remote servers have already been carried out. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or that the source datastores are on the same data server as the staging area.

When the staging area is on the target server, the IKM usually follows these steps:
 The IKM executes a single set-oriented SELECT statement to carry out the staging area and target declarative rules on all "C$" tables and local tables. This generates a result set.

 Simple "append" IKMs directly write this result set into the target table. More
complex IKMs create an "I$" table to store this result set.

 If the data flow needs to be checked against target constraints, the IKM calls a CKM
to isolate erroneous records and cleanse the "I$" table.

 The IKM writes records from the "I$" table or the result set to the target following the
defined strategy (incremental update, slowly changing dimension, etc.).

 The IKM drops the "I$" temporary table.

 Optionally, the IKM can call the CKM again to check the consistency of the target
datastore.
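
A condensed sketch of these steps for an incremental update strategy, with a single SQLite connection standing in for a target server that also hosts the staging area (the "C$", "I$" and target table names are hypothetical):

import sqlite3

tgt = sqlite3.connect(":memory:")  # target server, which also hosts the staging area

tgt.executescript("""
    CREATE TABLE "C$_ORDERS" (ORDER_ID INTEGER, AMOUNT REAL);
    CREATE TABLE F_ORDERS (ORDER_ID INTEGER PRIMARY KEY, AMOUNT REAL);
    INSERT INTO "C$_ORDERS" VALUES (1, 120.0), (2, 75.5);
    INSERT INTO F_ORDERS VALUES (1, 100.0);  -- pre-existing target row
""")

# One set-oriented SELECT applies the declarative rules and feeds the "I$" table.
tgt.execute('CREATE TABLE "I$_F_ORDERS" AS SELECT ORDER_ID, AMOUNT FROM "C$_ORDERS"')

# (A CKM could be called here to move rejected rows from "I$" to an "E$" table.)

# Incremental update: update the rows that already exist in the target ...
tgt.execute("""
    UPDATE F_ORDERS
    SET AMOUNT = (SELECT i.AMOUNT FROM "I$_F_ORDERS" i WHERE i.ORDER_ID = F_ORDERS.ORDER_ID)
    WHERE ORDER_ID IN (SELECT ORDER_ID FROM "I$_F_ORDERS")
""")

# ... then insert the rows that do not exist yet.
tgt.execute("""
    INSERT INTO F_ORDERS (ORDER_ID, AMOUNT)
    SELECT i.ORDER_ID, i.AMOUNT FROM "I$_F_ORDERS" i
    WHERE i.ORDER_ID NOT IN (SELECT ORDER_ID FROM F_ORDERS)
""")

# Drop the "I$" temporary table.
tgt.execute('DROP TABLE "I$_F_ORDERS"')
tgt.commit()

print(tgt.execute("SELECT * FROM F_ORDERS ORDER BY ORDER_ID").fetchall())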

When the staging area is not located on the target server, the IKM usually follows these steps:

 The IKM executes a single set-oriented SELECT statement to carry out the declarative rules on all "C$" tables and tables located on the source or staging area. This generates a result set.

 The IKM loads this result set into the target datastore, following the defined strategy
(append or incremental update).
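
The same idea when the staging area and the target sit on different servers, sketched with two SQLite connections (names hypothetical): the result set is computed on the staging side and then shipped to the target with the chosen strategy, append in this case.

import sqlite3

staging = sqlite3.connect(":memory:")  # staging server
target = sqlite3.connect(":memory:")   # target server

staging.executescript("""
    CREATE TABLE "C$_ORDERS" (ORDER_ID INTEGER, AMOUNT REAL);
    INSERT INTO "C$_ORDERS" VALUES (1, 120.0), (2, 75.5);
""")
target.execute("CREATE TABLE F_ORDERS (ORDER_ID INTEGER, AMOUNT REAL)")

# A single set-oriented SELECT on the staging side produces the result set.
result_set = staging.execute(
    'SELECT ORDER_ID, AMOUNT FROM "C$_ORDERS" WHERE AMOUNT > 0'
).fetchall()

# Load the result set into the target datastore (append strategy).
target.executemany("INSERT INTO F_ORDERS VALUES (?, ?)", result_set)
target.commit()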

5. CKM (Check Knowledge Module)

The CKM is in charge of checking that records of a data set are consistent with defined
constraints. The CKM is used to maintain data integrity and participates in the overall data
quality initiative. The CKM can be used in two ways:
 To check the consistency of existing data. This can be done on any datastore or within
mappings, by setting the STATIC_CONTROL option to "Yes". In the first case, the
data checked is the data currently in the datastore. In the second case, data in the
target datastore is checked after it is loaded.

 To check consistency of the incoming data before loading the records to a target
datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM
simulates the constraints of the target datastore on the resulting flow prior to writing
to the target.

In summary: the CKM can check either an existing table or the temporary "I$" table created
by an IKM.

The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set.

A CKM operates as described below in STATIC_CONTROL and FLOW_CONTROL modes.

In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them
against the data of the table. Records that don't match the constraints are written to the "E$"
error table in the staging area.
In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the
Mapping. It checks these constraints against the data contained in the "I$" flow table of the
staging area. Records that violate these constraints are written to the "E$" table of the staging
area.

In both cases, a CKM usually performs the following tasks:

1. Create the "E$" error table in the staging area. The error table should contain the same columns as the attributes of the checked datastore, as well as additional columns to trace error messages, check origin, check date, etc.

2. Isolate the erroneous records in the "E$" table for each primary key, alternate key, foreign key, condition, or mandatory column that needs to be checked.

3. If required, remove erroneous records from the table that has been checked.
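
A minimal sketch of those three tasks, checking a mandatory (NOT NULL) rule against an "I$" flow table in SQLite; the "E$" layout (error message, check origin, check date) is simplified and all names are hypothetical:

import sqlite3

stg = sqlite3.connect(":memory:")  # staging area

stg.executescript("""
    CREATE TABLE "I$_F_ORDERS" (ORDER_ID INTEGER, AMOUNT REAL);
    INSERT INTO "I$_F_ORDERS" VALUES (1, 120.0), (2, NULL), (3, 75.5);
""")

# Task 1: create the "E$" error table (same columns plus error-tracking columns).
stg.execute("""
    CREATE TABLE "E$_F_ORDERS" (
        ORDER_ID   INTEGER,
        AMOUNT     REAL,
        ERR_MESS   TEXT,
        ORIGIN     TEXT,
        CHECK_DATE TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Task 2: isolate the erroneous records for the checked constraint (AMOUNT is mandatory).
stg.execute("""
    INSERT INTO "E$_F_ORDERS" (ORDER_ID, AMOUNT, ERR_MESS, ORIGIN)
    SELECT ORDER_ID, AMOUNT, 'AMOUNT is mandatory', 'FLOW_CONTROL'
    FROM "I$_F_ORDERS" WHERE AMOUNT IS NULL
""")

# Task 3: if required, remove the erroneous records from the checked table.
stg.execute('DELETE FROM "I$_F_ORDERS" WHERE AMOUNT IS NULL')
stg.commit()

print(stg.execute('SELECT ORDER_ID, ERR_MESS FROM "E$_F_ORDERS"').fetchall())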

6. SKM (Service Knowledge Module): SKMs are used to generate the code required for data services. SKMs are used in data models. Data Services are specialized web services that enable access to application data in datastores, and to the changes captured for these datastores using Changed Data Capture.
Styles of Knowledge Modules

In ODI, there are two KM styles: template-style, and component-style.

Template-style KMs are available in both ODI 11g and ODI 12c.

Component-style KMs are available in ODI 12c only.

An LKM is either a template-style KM or a component-style KM.

Template-style KMs must be imported from an ODI directory called "C:\Oracle\Middleware\Oracle_Home\odi\sdk\xml-reference" into an ODI repository.

Component-style KMs are automatically installed in ODI when an ODI repository is created.
By default, ODI 12c uses component-style LKMs when an LKM is required, unless ODI users
choose to import and use template-style LKMs.

In ODI 12c, when a mapping is created and an LKM is required, ODI automatically assigns a component-style LKM to the mapping. If a template-style LKM has already been imported into an ODI project, then the template-style LKM is used instead.
