Advanced Analytics Platform Technical Guide R19
Table of Contents
Disclaimer
Introduction
Intended Audience
Application Overview
Application Architecture
Advanced Analytics Data Flow
Data ExStore
Core Analytics ETL Flow
Extract
Transform
Load
Analytics in Azure PaaS
InsightImport
Overview
Specific Features / Functions
Technical Details
Architecture
Technical Components
Additional Information
Configuration
Configuring Source Schema (Add/Remove/Configure Tables)
Running the procedures / Configuring the Analytics ETL Job
Logging
InsightLanding
Overview
Specific Features / Functions
Technical Details
Architecture
Technical Components
Configuration
Configuring ExtractList
InsightSource
Overview
Specific Features / Functions
Technical Details
Architecture
Disclaimer
THIS IS TEMENOS PROPRIETARY AND CONFIDENTIAL INFORMATION AND SHALL NOT BE DISCLOSED
TO ANY THIRD PARTY WITHOUT TEMENOS’ PRIOR WRITTEN CONSENT.
TEMENOS IS PROVIDING THIS DOCUMENT "AS-IS" AND NO SPECIFIC RESULTS FROM ITS USE ARE
ASSURED OR GUARANTEED. THERE ARE NO WARRANTIES OF ANY KIND, WHETHER EXPRESS OR
IMPLIED, WITH RESPECT TO THIS DOCUMENT, INCLUDING, WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OR CONDITIONS OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE AND
NON-INFRINGEMENT, AND TEMENOS EXPRESSLY DISCLAIMS ANY SUCH WARRANTIES AND
CONDITIONS.
Introduction
Temenos Analytics provides banking specific analytical solutions that improve business decisions, optimize
performance and enrich customer interaction. Financial institutions can transform their organizations to be
analytically driven with pre-built models, KPIs, dashboards, applications and reports, coupled with real-time
data and predictive analytics, allowing them to compete in the digital world. Analytics can be embedded
directly into core banking, channels, CRM and other solutions empowering people to make smarter
decisions and work more efficiently.
Intended Audience
This document is intended for a technical audience and is meant to provide an understanding of how the
Advanced Analytics Platform works, how it is structured, and how to use and configure it.
Users consulting this technical guide will require a working knowledge of MS SQL Server Management
Studio, MS Structured Query Language and T-SQL Script.
Application Overview
The Advanced Analytics Platform is a multi-database business intelligence platform for analytical reporting.
The data warehouse data model is designed specifically for financial institution analytical reporting and ad
hoc data analysis. It supports and integrates multiple data sources and is core banking system agnostic,
meaning it can be configured to work with different core banking systems.
The multiple databases in Advanced Analytics all serve a particular function in the overall data warehousing
process and those features are listed in the table below, together with an overview of the Analytics ETL
(Extract Transform Load) process.
Application Architecture
Advanced Analytics Data Flow
In Figure 1, we can see the Advanced Analytics Data flow diagram. The starting point for the data flow
within this platform is our set of data sources. As previously explained, Analytics is a multi-source framework
compatible with a variety of source systems. These source systems feed data to the Advanced Analytics
Platform through the Analytics ETL process.
In the example below, Temenos T24 Core Banking represents the main banking source system – data is normally extracted from Core Banking during the close of business (although the extraction job can
also run as a standalone service) and saved into CSV files through an application called DW.EXPORT, which
is part of the Data Warehouse module. From R18, DW.EXPORT can also run in online mode, ensuring that
the Analytics Platform is updated in a near real-time manner.
Core banking data is then imported to the Advanced Analytics Platform using Data ExStore. Data ExStore
is a cost-effective and simple-to-implement solution for banks to extract Core banking data and transform
it into a relational database format. Data ExStore consists of a dedicated SQL database, called
InsightImport, and a set of stored procedures which will load Core banking data, look after data typing,
data parsing, data profiling and finally import new data to InsightLanding, the entry point for any source
system in the platform. If near real-time updates are enabled, InsightImport will contain two separate sets of tables, i.e. a relational replica of the latest set of CSV files extracted by DW.EXPORT from Temenos Core Banking in batch mode, and a set of temporary online tables storing intra-day updates.
InsightLanding is the platform’s Operational Data Store and archives source data in source format
potentially for any ETL date ever run. Thanks to a series of views it is possible to query InsightLanding for
data archived during different ETL dates. InsightLanding is quite an important source of information that
can directly provide data sets for reports. From R19, InsightLanding will also store a copy of the online
tables in InsightImport that can be used for intra-day reporting.
The integration of incoming data from multiple source systems is done in the InsightSource database,
which contains only one day of data collected from InsightLanding. If any of the optional products are
available, InsightSource also gathers data from Customer Profitability and Predictive Analytics. These
optional modules can be used – during this stage of Analytics ETL – to perform additional calculations which
are then written back to InsightSource – e.g., Customer Profitability calculates a parameter called Net Income, which indicates the profitability of a specific customer over a certain time horizon (i.e. on a monthly or yearly basis). Predictive Analytics, instead, can provide calculations on more complex predictive parameters such as attrition risk or customer lifetime value. In addition to this, InsightSource can also combine transactions from multiple or different dates. For instance, a bank may have a banking source system from which data is extracted on a daily basis and a Budget source system whose data is only made available once a year. In InsightSource these two source systems will be integrated even though the data
format and the actual transformation of this data from sources to target tables, which is the very core of
the ETL process, is performed within the InsightStaging database.
Staging extracts data from InsightSource and transforms it according to a series of source views, which
define how to map source data into the target tables stored into the InsightWarehouse database, our
platform’s data vault. Also, during ETL, InsightStaging applies to the data the business rules defined within the InsightETL database – these business rules allow, for instance, source data to be grouped into new categories, calculations to be carried out to populate new columns, and entirely new tables to be created in the InsightStaging, Source or Landing databases. Business rules can also alter the structure of abstraction
views in the InsightWarehouse database. Business rules can be designed and amended in InsightETL
directly using the Analytics Web Application.
Once this process is completed, the transformed data is loaded in InsightWarehouse and, finally, from
InsightWarehouse to the Analytics Cubes or, in some cases, the SSAS Tabular Models. InsightWarehouse,
InsightLanding, and the Cubes will be the core sources of information for the quick reports, custom reports,
analyses reports, Power BI reports, KPI dashboards and Information Tiles accessible from the Presentation
Layer.
Some of the databases shown in the Dataflow Diagram below are important for the configuration and the
execution of ETL but are not used as data sources for reports and other Analytics Front End content. To
start with, the aforementioned InsightETL database stores performance booster features (e.g. multi-
threading), data quality services configuration and logging for the Analytics ETL process in addition to the
already discussed Rules Design facilities. Also, the InsightSystem database hosts configuration and
parameter tables used by the Analytics Web Application – and in the same way, InsightPricing and
PredictivePlatform databases will as well contain configuration and parameter tables used by the Customer
Profitability and Predictive Analytics modules, respectively.
Data ExStore
Data ExStore is a cost-effective and simple-to-implement packaged solution for banks to extract, parse, transform and store their Temenos Core banking data in a relational database format.
Data ExStore is based on Microsoft SQL Server and works with Core Banking/T24 R08 and above, on any
platform – Oracle, Microsoft SQL Server, DB2 and jBase. It provides any Temenos Core banking client with
an easy way to extract data from Core Banking for reporting using Temenos Analytics, SQL Server Reporting
Services or any other reporting tool compatible with SQL Server.
• DW.EXPORT
DW Export is a Temenos Core banking application which exports data from Core banking into
CSV/text files that can be consumed by downstream Analytics processes. The data extracts are not properly data typed or parsed; all values are in text format (dates, balances, etc.). DW.Export is executed as part of the overnight close of business process to perform bulk data extraction. This batch process can also be executed in an online mode for intra-day updates.
Out-of-the-box, DW.EXPORT is configured to extract a set of tables from Temenos core banking.
If any additional table is required by the client, DW.Export applications can be configured to
include those tables.
The extracted DW.Export text files still contain complex Core Banking data with multi-value and
local reference fields. This process automatically parses the data into simple, readable format
which can be used by any downstream bank process or reporting engine.
The cleansed batch data from Temenos Core Banking will be loaded in a set of SQL Server tables
with a “dbo” schema e.g. dbo.CUSTOMER. If online processing is enabled, in addition to these
standard “dbo” tables, InsightImport can also store a set of tables with “Online” schema that
store intra-day updates in quasi-real time e.g. Online.CUSTOMER. Database administrators can
define whether individual tables should be subject to online processing or not through the
SourceSchema configuration table (that will be illustrated in the InsightImport chapter).
If online processing is enabled for a table, the first data load will always be in batch mode: data is extracted from CSV files, data typed, data profiled, subjected to data quality and to multi-value, sub-value and local reference parsing, and then loaded to “dbo” tables as in standard batch processing. Once the first load is completed, the online processing will pull data from the source
system after a set time interval and populate the “Online” schema tables with any intra-day
updates. New entries in the “Online” tables will be then subjected to data quality and copied to
the standard “dbo” tables. Finally, the whole content of the updated “dbo” table will be subject
to multi-value and sub-value parsing and copied to InsightLanding.
• InsightLanding
InsightLanding is effectively a relational copy of Temenos Core banking in a SQL Server format
and can be used for reporting purposes. By default, tables in this database are updated in batch
mode when Analytics ETL runs. InsightLanding will automatically contain all data from Temenos
Core banking which is extracted and parsed using InsightImport. This includes all module data (AA, LD, etc.) that has been extracted. These copies of data are kept in the same physical
database and separated by source table name with source system schema. E.g. if the BS
(Banking System) source system has a CUSTOMER table, there will be one BS.CUSTOMER table
in InsightLanding that stores a copy of all CUSTOMER records loaded into InsightLanding for
each business date processed during Analytics ETL. By default, all days of data are stored in one single table in Columnstore format and users can filter records loaded on a specific date using the MIS_DATE column as a filter. (If a bank wishes to use row-based format tables in InsightLanding instead, there will be a CUSTOMER table for each day of data that has been extracted and the business date will be part of the table schema, e.g. 20190101BS.CUSTOMER.)
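For example, a report writer could retrieve the CUSTOMER records archived for a given business date with a query along these lines – the schema name BS and the MIS_DATE column come from the example above, while the date value is purely illustrative:

    -- Illustrative query of archived batch data in InsightLanding
    SELECT *
    FROM BS.CUSTOMER
    WHERE MIS_DATE = '20190101';   -- illustrative business date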
As in InsightImport, the database administrator can enable online extraction for specific tables
also in InsightLanding. This is done through an online flag that can be enabled or disabled in the
ExtractList configuration table (as explained in the InsightLanding chapter, this table will define
the extraction features for each and every table stored in InsightLanding). If online processing
is enabled for a table, the first processing will be in batch mode and populate the standard table
with source system-specific schema e.g. BS.CUSTOMER. Any intra-day update thereafter will be
stored in a dedicated table with an “Online” schema, e.g. Online.CUSTOMER. It must be noted that, when online processing is enabled, BS.CUSTOMER in InsightLanding remains unchanged until the next ETL, while dbo.CUSTOMER in InsightImport is constantly updated; the latest copy of the dbo.CUSTOMER table in InsightImport (which includes intra-day updates) is stored in the Online.CUSTOMER table in InsightLanding after a set time interval. The
online processing for InsightImport and InsightLanding is shown in Figure 3.
InsightLanding allows for data archiving and for reporting to be produced on different days of data and on intra-day data. This archived data can also satisfy bank audit requirements regarding the archiving of data. All data types will be accurate and local reference and multi-value fields will be parsed. This point-in-time data extract can be used for reporting, building extracts, or building interfaces to a bank’s data warehouse. Reports are built upon two types of
InsightLanding views i.e. batch views with source system-related schemas (e.g. BS.v_Customer)
and online views with online schemas (e.g. Online.v_Customer). While the former are based only
on batch InsightLanding tables such as BS.CUSTOMER, the latter query the full set of ExtractList tables (including batch-only ones) and return a union of the online deltas and the previous business date’s batch data. More information will be provided in the dedicated InsightLanding chapter.
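As a rough illustration only, an online view of this kind can be thought of as a union of the latest batch snapshot and the intra-day delta table – a minimal sketch, assuming the table and column names used in the examples above (BS.CUSTOMER, Online.CUSTOMER, MIS_DATE); the column list is illustrative and the real view definitions shipped with the product may differ:

    -- Hypothetical sketch of an online view in InsightLanding
    CREATE VIEW Online.v_Customer
    AS
    SELECT CUSTOMER_ID, MNEMONIC, SECTOR            -- columns are illustrative
    FROM BS.CUSTOMER                                -- previous business date batch snapshot
    WHERE MIS_DATE = (SELECT MAX(MIS_DATE) FROM BS.CUSTOMER)
    UNION ALL
    SELECT CUSTOMER_ID, MNEMONIC, SECTOR
    FROM Online.CUSTOMER;                           -- intra-day online deltas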
Data ExStore does not include Business Content i.e. it does not contain any dashboards, BI applications,
and OLAP cubes; it is simply a data extraction (and transformation) solution with reporting based on Core
Banking tables in a relational format. Business Content is available in the Analytics Web Application and its corresponding InsightSystem database, and within the SSAS Analytics Cubes or Tabular servers.
Also, Data ExStore does not include a facility to extract data from additional Data Sources – Data ExStore
only works with Core banking/T24 data (R08 and above). Back-patches can be made with R06 and above
but must be reviewed by the Insight Product team.
High-level progress and timing of Analytics ETL are logged in the StagingEventLog table while more
detailed information is available in the StagingEventLogDetails table, both of which are located in the
InsightETL database.
Extract
The core extraction phase takes place when data is imported from the InsightSource database to
InsightStaging. Data coming from source systems’ columns is here mapped against the Data warehouse
columns through the so-called v_source views. V_source views can join multiple tables from
InsightSource, rename their columns and apply simple transformations e.g. CONCAT.
Different source systems in InsightSource will have separate sets of v_source views. However, separate tables from the same source system can be combined into a single target table. The outcome of the mapping
performed by v_source views will be located in temporary tables called source tables.
For example, the v_sourceAccountBS_AALending will use the AA_ARRANGEMENT table from InsightSource
as its main source table, but this will be joined as well with ACCOUNT, DATE, COMPANY, AA_PRODUCT
etc. and columns from all these joined tables can be extracted, relabeled and used.
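A minimal sketch of what such a v_source view might look like is shown below – it assumes the view lives in InsightStaging and reads from InsightSource, and the schema names, join conditions and column list are purely illustrative; the real v_sourceAccountBS_AALending shipped with the product is considerably richer:

    -- Illustrative sketch of a v_source mapping view (column names and joins are assumptions)
    CREATE VIEW dbo.v_sourceAccountBS_AALending
    AS
    SELECT
        arr.ID                              AS AccountNum,       -- renamed source column
        acc.CATEGORY                        AS ProductCategory,
        co.MNEMONIC                         AS BranchMnemonic,
        CONCAT(arr.ID, '-', co.MNEMONIC)    AS AccountKey         -- simple transformation
    FROM InsightSource.BS.AA_ARRANGEMENT AS arr
    JOIN InsightSource.BS.ACCOUNT        AS acc ON acc.ARRANGEMENT_ID = arr.ID
    JOIN InsightSource.BS.COMPANY        AS co  ON co.ID = arr.CO_CODE;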
The data extracted through this InsightSource view will be materialized, within the InsightStaging database, in the
sourceAccountBS_AALending temporary table based on the content of v_sourceAccountBS_AALending. The
data types of the columns materialized in the “source” tables in InsightStaging and any additional column
that is not present in the v_source view will be read from the DataDictionary table in InsightWarehouse
(see Load section of this chapter).
From a technical point of view, data is materialized in a source temporary table when the corresponding s_*_Extract stored procedure is executed (e.g. s_FactAccount_Extract, s_DimAccount_Extract etc.). As previously mentioned, each Dim or Fact table in the Warehouse has its corresponding s_Fact*_Extract or s_Dim*_Extract stored procedure. This same stored procedure will also read Data
Dictionary to obtain the correct data type for each column in the “source” table and apply the conversion.
Transform
In the Transform phase, more calculations will be applied to the data materialized in the temporary “source”
tables of InsightStaging – these calculations are defined through Data Manager’s business rules for the
InsightStaging database and with execution phase set to “Transform”.
In addition to this, the Transform phase of Analytics ETL also extracts any required values from the databases of any optional products installed and applies them to InsightStaging tables – for example, this
is when Customer and Account Monthly and Yearly Net Income are imported in InsightStaging if the
Customer Profitability product is installed; likewise, data can be imported from the databases of the
Predictive Analytics module, from Enterprise Risk Management etc.
Then, the transformed data from the “source” tables is copied to a set of “staging” temporary tables and
the corresponding Dim and Fact tables are created but left empty (e.g. the modified content of
sourceAccount is moved to stagingAccount and empty DimAccount and FactAccount tables are created
during the Transform phase). These temporary tables will be then populated and finally loaded into the
InsightWarehouse database during the Load phase.
Technically, an s_*_Transform stored procedure exists for each Dim or Fact table and executes what is described above (e.g. s_FactAccount_Transform, s_DimAccount_Transform etc.).
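A quick way to see which of these phase-specific procedures exist in a given installation is to query the system catalog directly – a simple check, assuming the procedures are deployed in the InsightStaging database as regular stored procedures, as described above:

    -- List the Transform stored procedures present in InsightStaging
    USE InsightStaging;
    SELECT name
    FROM sys.procedures
    WHERE name LIKE 's[_]%[_]Transform'
    ORDER BY name;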
Load
The list of dimension and fact columns to be loaded in the InsightWarehouse database is controlled by the
DataDictionary table (and by the corresponding v_DataDictionary view) residing in this database. This table
also defines whether each dimension should be treated as type 1 or type 2.
The only columns whose content is not defined through mapping or calculations based on InsightStaging
data are surrogate keys (e.g. CustomerId, AccountId etc.) which are added during the Load part of ETL.
Surrogate keys are written directly in InsightWarehouse while the remaining columns are copied from the
temporary Dim and Fact tables in InsightStaging to corresponding Dim and Fact tables in
InsightWarehouse. If an error is encountered during load, the current transaction (per table) is rolled back, the Analytics ETL process is interrupted and an error is reported. Users will only be able to see the newly loaded data in reports once the process has completed successfully for each and every table and the new business date is enabled, at the end of the Analytics ETL process.
The whole process described above is executed by a set of sub-steps called s_*_Load – yet again a stored
procedure will exist for each and every Dim or Fact table (e.g. s_FactAccount_Load, s_DimAccount_Load
etc.).
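For instance, to inspect which columns are configured for a given dimension and how each is treated, the DataDictionary (or its v_DataDictionary view) can be queried – a rough sketch only, since the actual column names of DataDictionary (here assumed to be TableName, ColumnName and Type1or2) are not documented in this section:

    -- Hypothetical look-up of the load configuration for DimAccount
    SELECT TableName, ColumnName, Type1or2        -- column names are assumptions
    FROM InsightWarehouse.dbo.v_DataDictionary
    WHERE TableName = 'DimAccount'
    ORDER BY ColumnName;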
For an overview of Platform as a Service (PaaS), see https://azure.microsoft.com/en-us/overview/what-is-paas/.
When PaaS is used, the Advanced Analytics Platform preserves its full feature set and code re-work is minimized. The advantages of this kind of deployment for the bank include more elastic scalability, the possibility to leverage built-in high availability and disaster recovery functionality, and the removal of the need to maintain and support dedicated IT infrastructure.
InsightImport
Overview
Advanced Analytics has a rich data warehouse which accepts data from multiple data sources and is
architected to be core banking system agnostic. However, as explained in the previous chapter, there is a
specific functionality built into it to deal explicitly with importing and parsing data from the Temenos core
banking system. This functionality exists in the InsightImport database and this database and the
procedures within it are only relevant for importing data from Temenos Core banking and not from other
core banking systems.
As previously mentioned, Temenos Core banking can export data using an Analytics-specific application
called DW.Export which exports that data to CSV files (batch export) or directly to the Analytics Platform
(online export). When CSV files are used, a log of all the exported CSV files is provided in a CSV file called
HASH_TOTAL. See the DW.Export User Guide for a detailed explanation of this application and its features.
This CSV data coming from DW.Export is still in the Temenos Core banking data structure, meaning it still
has multi-values and sub-values.
The InsightLanding database requires data in a relational database format with properly typed data.
InsightImport bridges that gap by importing the CSV files, data typing each column, and parsing the local
reference, multi-value, and sub-value columns (for more information and examples about how multi-values,
sub-values and local reference fields are parsed in InsightImport and for more detailed information about
the R19 enhancements to this feature, please refer to the Additional Information section of this chapter).
Sub-table parsing relies on fine-grained, row-level multithreading, which improves efficiency.
Furthermore, the sub-table parser has been rewritten to work with Data Quality to multithread rows in
memory. The end result is to create a properly data-typed relational format for import into InsightLanding.
ETL batch control, multithreading configuration and all logging concerning data processing in this database are stored in dedicated tables within the InsightETL database.
• Import Data from CSV – The import procedure connects to CSV files which are exported by DW.Export and imports those files into the InsightImport database.
• Data Typing – Data from the CSV files of DW.Export is all considered text by default. InsightImport provides a way to automatically type the data being imported as dates, integers, etc. in order to have efficient data storage.
• Data Parsing – Temenos Core banking is a highly flexible system which allows entire tables to be stored within a single column of a row of the parent table. These are Local Reference, Multi-Value and Sub-Value fields. InsightImport provides a mechanism to parse these cells out into additional tables which are joined to their parent table on a unique key.
• Data Profiling – A data profile is created which checks if the source system dictated data types are correct and records a newly calculated data type if the source data type is not adequate. The data profile includes Min Value, Max Value, Length, the string with the maximum length, Not Null, Is Numeric, Is Date, etc.
• User Data Type Override – Tables created in InsightImport are created from a Data Dictionary. The user can override a source system data type with another data type if required.
• Object Renaming – InsightImport allows you to rename the objects being imported at the table and column level. If a table has been renamed in Temenos Core banking but its structure hasn’t changed, this feature can be used to rename the table so the mapping further downstream in InsightStaging does not have to be modified and can retain logic using the original table name. This is quite useful for CRF report extracts which are frequently renamed.
• Skip Columns – InsightImport allows you to skip importing certain columns as they may not be required. This allows you to store less data or ignore columns that you know to contain bad data.
• Data Quality Service Process – InsightImport allows you to ensure that the Analytics ETL and Data ExStore job do not fail if the import process encounters non-critical bad data.
• GDPR Compliance – InsightImport allows you to import from Temenos Core Banking the metadata required to manage consent and data erasure according to GDPR legislation.
Technical Details
Architecture
InsightImport is the second step in a Temenos Core banking implementation of the Advanced Analytics
platform, with the first step being DW.EXPORT extraction. It accepts the CSV files exported from Core
banking or online updates, imports them into a SQL database, assigns data types and parses the data into a relational format to be consumed by InsightLanding.
Technical Components
Because each Temenos Core banking implementation will have different modules and local customizations,
the tables sent to the Analytics platform may also be different between installations. There may be
additional tables and/or additional columns required. A combination of locally configured values and data
from DW.Export imported tables will let the process know which tables to import and which columns are
local-refs or multi-value.
For example, the SourceSchema table is loaded with the relevant table names to import, and it is
configured manually via the Local TFS layer. SourceSchema is an Analytics-specific configuration table
which contains the list of tables that should be created in InsightImport as replicas of the CSV files imported
from Core banking. This table also includes a set of flags that identify whether ETL should parse local
reference, multi-value or sub-value fields within an imported Core Banking table (i.e. T24ExtractLocalRef,
T24MultiValueAssociation and T24SubValueAssociation, respectively). SourceSchema also includes flags
that specify whether DW Online Process and Online Process should be executed on a table or not.
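As an illustration, enabling parsing for an additional Core Banking table might look roughly like the statement below – the flag columns named here (T24ExtractLocalRef, T24MultiValueAssociation, T24SubValueAssociation) come from the description above, while the table-name column and the flag values are assumptions, so the actual column list should be checked against the shipped SourceSchema definition:

    -- Hypothetical example: register an extra Core Banking table for import
    INSERT INTO InsightImport.dbo.SourceSchema
        (TableName, T24ExtractLocalRef, T24MultiValueAssociation, T24SubValueAssociation)
    VALUES
        ('MM_MONEY_MARKET', 'Yes', 'Yes', 'No');   -- column names and values are illustrative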
The first time the Analytics ETL process runs, it will generate a data model consisting of a table
(Insight.Entities) containing all the tables to be created including sub-tables (Local Refs, multi-values
etc.) and a table (Insight.Attributes) which contains all the columns to be created.
The Insight.Entities and Insight.Attributes tables are populated from the following metadata sources: the SourceSchema table, the Standard_Selection table, the Insight.SourceColumns table (which is populated from the top line of each CSV file), the Local_Table table and the Local_Ref_Table table.
s_ImportBaseTables and s_ImportSubTables are the stored procedures that control the processes
to create tables from CSV and from LocalRef/Multi-value/Sub-Values. They are supported by a number of
helper stored procedures that will be discussed as we proceed.
The general idea behind the process is that on first load the data is loaded into tables created with
nvarchar(Max) columns, the data is then data profiled and the correct data type is determined – this process
is called Data Profiler Import (DPI). Once the data profiling process is completed, data tables are created
with the correct data type and then loaded with data. Furthermore, an entry for each column of each data
table imported from Core banking and the associated data types is stored in the Insight.ImportSchema
table.
On subsequent runs, the data profiling import process will not be repeated for existing tables; these tables will be loaded into InsightImport making use of the data profile information defined in Insight.ImportSchema during the first Analytics ETL, and only the content of those columns that were previously profiled will be imported, ignoring any new columns.
If nothing changes in the metadata, this speeds up the process and minimizes the risk of Analytics ETL failure, as the data is loaded directly into the existing typed table and data profiling is not done again. If something does change during subsequent ETL processes, like a column size in Standard_Selection, the appropriate configuration table in InsightImport will need to be edited manually to ensure that the process re-runs the data profiling step and the required tables are recreated. To be specific, if a new table is created, a definition for it must be entered in SourceSchema. If new columns are added to an existing table, instead, all column entries related to the existing table will have to be deleted from Insight.ImportSchema – this will trigger data profiling to be re-done for all the columns in the table (including the new ones) during the next Analytics ETL execution.
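For example, forcing a re-profile of a table after new locally developed columns have been added could be done with a statement along the following lines – the filtering column (here assumed to be called TableName) and the table name value are illustrative and should be verified against the actual Insight.ImportSchema definition:

    -- Hypothetical example: remove the stored profile so the next ETL re-profiles the table
    DELETE FROM InsightImport.Insight.ImportSchema
    WHERE TableName = 'BNK_CUSTOMER';   -- column name and value are assumptions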
Data Quality
From R18, a new Data Quality Import (also known as DQI) has been included in the Data ExStore stage of Analytics ETL. The main design goal of DQI is to achieve a relatively stable table structure for import. With such stability, corrupted data can be detected and daily data with the expected schema can be safely brought into InsightLanding with Columnstore Index technology.
This Import stage, in fact, could be interrupted by corrupted fields in the Temenos Core Banking CSV file or by unexpected new text that is no longer compatible with the strongly-typed target column. Often, it is a trivial column that terminates the whole ETL, even though an accurate value for this column is not urgently required and a correction could be applied later – provided, of course, that the bad data can be spotted and cleansed during the importing phase and then red-flagged to all downstream consumers. This is the main reason a Data Quality component is needed.
Secondly, the column schemas of the columnstore indexes in the new ODS should be kept relatively stable
and not subject to frequent adjustments – this feature will be discussed in the chapter dedicated to
InsightLanding. Maintaining a set of fixed column data types as long as possible in the ODS tables is key
to achieving both the super high column-wise data compression and optimal performance. It is now
necessary to have a regulator on data quality to constrain the data types cast for the CSV fields so that the
source CSVs always conform to the existing target SQL tables in order to avoid the penalty of randomly
rebuilding huge archival tables.
In general, when corrupted values are encountered the ETL process is no longer terminated; processing continues and the bad data is dealt with later. Before that, the Data Quality component marks the affected areas, repairs them according to predefined rules, and provides all subsequent handling with the necessary logging. Configuration tables for parameterizing DQI repairs and logging are stored in
the InsightETL database and will be discussed in a dedicated chapter in this document.
In the following image we can see a diagram of the new data import process flow with Data Quality
Services.
Figure 7 - New data import process flow with Data Quality Services
Furthermore, when data quality import is executed, three new columns will be added to each and every
InsightImport base or ‘sub’ table affected (e.g. BNK_CUSTOMER, BNK_ACCOUNT, BNK_ACCOUNT_LocalRef
etc.) to host detailed data quality results about the row processed – these new DQ-related columns are
ETL_DQ_RevisionCount, ETL_DQ_ColumnsRevised and ETL_DQ_ErrorMessage.
These ETL_DQ system columns are permanent: if any data quality issue is encountered in a table, they will be added to it starting from the second execution of Analytics ETL, when Data Quality comes into effect (as previously illustrated, the very first ETL execution uses the Data Profiler only). Therefore, users not only do not need to create these system columns manually, but should also avoid modifying them – even the position of the data quality system columns in the table should not be changed. The DQI process makes sure they are always appended at the rightmost position of each table to achieve optimized ordinal mapping; this guarantees the best processing performance, and changes to this initial setting are therefore strongly discouraged.
The new columns have the following behaviour. In ETL_DQ_RevisionCount, a Null value means the row was not DQI-checked because the T-SQL bulk insert statement imported it successfully; the value ‘0’ means that DQI checked the row and found no issue; any positive integer indicates the number of DQI issues encountered – the details of these issues are recorded in the other two data quality system fields.
Users can either browse the resulting JSON string in a plain view or utilize the fn_GetJsonErrMsg function in InsightETL to output it in a table style. In the rare scenario in which the total length of the original JSON string exceeds 4000 characters, DQI will truncate it at 4000 characters and replace the trailing characters with three dots. An example of the plain view is provided in the image below.
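As a practical illustration, rows that were touched by DQI in a given import table could be inspected with a query of the following shape – the table name is the example used earlier, and the way fn_GetJsonErrMsg is invoked here (CROSS APPLY over the error-message column) is an assumption about its signature rather than documented usage:

    -- Hypothetical check of rows repaired by DQI in an imported base table
    SELECT c.ETL_DQ_RevisionCount,
           c.ETL_DQ_ColumnsRevised,
           e.*                                   -- parsed error details (shape assumed)
    FROM InsightImport.dbo.BNK_CUSTOMER AS c
    CROSS APPLY InsightETL.dbo.fn_GetJsonErrMsg(c.ETL_DQ_ErrorMessage) AS e
    WHERE c.ETL_DQ_RevisionCount > 0;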
To further adjust DQI’s behavior, the ‘DQAlwaysOn’ flag has been added to SourceSchema – as we will see in the dedicated section, this flag allows Analytics ETL to skip the Data Profiler Import and apply DQI immediately from the first execution.
In addition to this, stored procedures and functions in charge of processing Temenos Core Banking data in
InsightImport have been rewritten or added ex novo to accommodate the new DQI process. New DQI-
related procedures and functions have also been created in the InsightETL database. Technically, the core programming of DQI is implemented as a standard C# DLL library, wrapped within a SQL-CLR assembly and called by T-SQL stored procedures. The core DLL executes DQ logic in a parallel, high-speed streaming fashion and utilizes multithreaded firehose reads and bulk writes from start to finish. In
general, memory consumption is low and stable while CPU and disk utilization is high and efficient.
DQI Rules
All the Data Quality rules used to define whether data table columns are correctly data typed and, hence, to
orchestrate the DQI process are stored in the RuleDataQuality table of InsightETL. More information about
this table will be provided in the dedicated chapter and section.
DLL Assembly
In the R18 initial release, the two DLLs required for DQI are built on .NET Framework 4.5.1, targeting SQL Server 2016. Additional code-signing steps with a digital certificate are required for installation on SQL Server 2017. These DLLs are listed below:
Old process
The old import process until Release 2014 relied on the Source Schema configuration table, and also on
the following SQL Stored Procedures:
s_InsightImportSystemTables_Update
s_InsightImport_Update
s_T24TableFromSelection_Create
s_T24SourceSchema_Update
These stored procedures are now completely deprecated in R17 but were still available for backward
compatibility until Release 2016.
Tables
SourceSchema
The table InsightImport..SourceSchema contains a list of all files extracted from Temenos Core banking
needed for the Analytics ETL process and should be configured before the first Analytics ETL is executed.
Records in the SourceSchema table are used to control the setup of data files that need additional
processing (i.e. parsing multi-valued or sub-valued columns and data type conversion).
There should be a record for each table being imported from Core banking. Please note that there may be
more than one CSV file per Core banking table being imported but you still only need one record per source
system table, not per CSV file.
From R19, SourceSchema includes a flag that allows the database administrator to configure a table for
online processing. It should be noted that any table that has an API needs to be configured for batch
extraction only (with the exception of T24 metadata tables such as 'LOCAL_REF_TABLE', 'LOCAL_TABLE' and
'STANDARD_SELECTION').
The list of table names in the SourceSchema table should match the list of table names in the
InsightLanding..ExtractList table. Below is a description of each column in the SourceSchema table
and how it should be populated for each record.
Insight.SourceBusinessEntities
This table is used to create a list of Core banking companies that CSV files must be imported for and should
be configured before the first Analytics ETL is executed.
Insight.ColumnOverride
This table is used to rename or change the data type of columns and should be configured before the first
Analytics ETL is executed.
Insight.SourceColumns
This table is used to store the list of columns in each table imported in InsightImport.
Insight.Attributes
This table stores the data dictionary compiled from Insight.SourceColumns, T24StandardSelection,
SourceSchema, Insight.DataProfiles, Insight.ColumnOverride.
Insight.Entities
This table contains a list of tables to be processed and it is populated by the s_InsightImportTableList_Create stored procedure. (Please note that a foreign key relationship is defined between Insight.Entities and Insight.Attributes, but no cascade-update is in effect. Updates made in Entities must be manually synchronized to Attributes with transaction protection.)
Insight.KeyColumns
This table is the source of any columns which are not in the source system data dictionary (i.e. the Core banking Standard Selection table copied to Insight.T24StandardSelection in InsightImport) and which are used to create key columns on a global (all tables) basis. This table should be configured before the first
Analytics ETL begins.
From R19, a new record definition for the TIME_STAMP column has been added for all table types to
facilitate the DW Online integration.
Insight.DataProfiles
This table stores the results of the data profiling process and it is updated by Insight.s_DataProfile_Create
only.
Insight.ImportSchema
The new InsightImport.Insight.ImportSchema table serves an important role in Analytics ETL, Process Data ExStore and the DQI process. In fact, Insight.ImportSchema is the only ‘blueprint’ table used to create table structures during Data Quality Import. The column schema information it holds not only ultimately determines how the table structure is created, but is also used as the definitive quality standard for DQI to determine the final data type for a field value in question. As long as a table is listed in ImportSchema, it will be subject to DQI checks during Analytics ETL/Process Data ExStore. Any raw value not conforming to the specified data type or violating the nullability condition is subject to DQI correction. Except for the three system columns (i.e. ETL_DQ_RevisionCount, ETL_DQ_ColumnsRevised and ETL_DQ_ErrorMessage), all column names in an InsightImport data table must be registered in ImportSchema; any column that has not been registered in ImportSchema is ignored during DQ Import. Any base table not listed in ImportSchema undergoes Data Profiling Import (DPI) instead of the DQI process.
For this reason, no row set is shipped with ImportSchema by default, i.e. this table is initially empty. The initial run of Analytics ETL will then operate DPI to create the first set of base table structures. These initial table structures are automatically collected into ImportSchema as the drafted import table structure. Such Data Profiler-based import table structures are only based on the initial data surveyed and may not continue to be suitable for future imports. Therefore, a dedicated resource should thoroughly review the table structures concluded from the DPI results and make any necessary changes to the drafted column types according to the true meaning of each column’s business logic. Usually, after a couple of rounds of revision, the table structure should become stable for long-term ETL operation. The column schema information within ImportSchema is the definitive quality standard for DQI to create the base and sub tables, and import data into them.
Important Note: new entries should not be manually entered in this table but
existing column data profile definitions can be deleted to trigger new data profiling
for a table (e.g. if new locally developed columns have been added to the table).
Comparing the combined roles of Insight.Entities and Insight.Attributes with ImportSchema’s, the
difference is that the former are only used in the DPI operations that look after data being surveyed –
where irregular data may cause unexpected fluctuations. The latter, however, allows human intelligence to
decide to either accept or override the DPI results – this is more suitable for long-term ETL operation.
In case of absence of related CSV files during the initial Analytics ETL/Process Data ExStore execution,
some empty base tables may still be created as conceptual structures based on the information given in
StandardSelection and SourceSchema. In this empty table scenario, the column schema information is not
collected into Insight.ImportSchema and it will not be processed by DQI until the next import is done with a
complete set of CSV splits.
In certain cases, DQI may also get involved with DPI during the initial ETL Import for base tables. In the
event that the DPI bulk insert fails twice, DQI will take over and handle the situation.
Insight.CsvColumns
Each and every CSV split is surveyed in a multithreaded CLR procedure called s_CollectCsvHeaderColumns.
The results of this split process are collected in the Insight.CsvColumns table to create individual views for
each CSV split during Data Quality import. This table contains one row per column per CSV file instance.
Insight.Numbers
This table holds a sequence of consecutive integers that are used during ETL import. From R18, the numbers are generated through a recursive CTE instead of a cross-join from the system tables, as was done in earlier releases.
Insight.T24LocalRefs
This table presents a full list of local reference columns including information about the source system table
they originate from, their label and target table in Analytics and the characteristics of these local reference
fields, e.g. data type, available values, etc. It combines information from Local_table and Local_ref_table.
Insight.T24LocalRefsWithSS
This table presents a full list of local reference columns as described in the Temenos Core Banking
STANDARD.SELECTION table (imported in the Analytics Platform through the associated
STANDARD_SELECTION CSV file).
Insight.T24StandardSelection
This table displays the content of the Standard Selection table, imported from the Temenos Core banking
system and containing source data dictionary information. Each row corresponds to the definition of one
Temenos Core Banking table column.
Insight.T24SubAttributeSourceData
This table displays the list of columns resulting from the parsing of multi values, multi value sub values,
local references and local reference sub values. These attributes will be stored in the so-called ‘sub’ tables,
whose details will be also stored in this table.
Insight.T24MultiValueSubValueTables
This table stores the list of multi-value (MV) and multi-value sub-value (MVSV) tables to be created in
InsightImport during the MV and MVSV parsing phase of ETL. The content of this table is the combination of the entries listed in the T24MultiValues and T24MultiValueSubValues materialized views (which will be discussed later in this chapter).
This table is populated automatically the first time ETL runs, based on the content of the SourceSchema
table. If the T24MultivalueAssociation flag is set to “Yes” for a table, all the child tables required for parsing
all the multi-values or multi-values sets within the parent table will be listed in
Insight.T24MultiValueSubValueTables. If the T24SubvalueAssociation flag is set to “Yes” for said table, all the child tables required for parsing all the multi-value sub-values or multi-value sub-value sets within the parent table will also be included here. The MV and MVSV tables will be determined programmatically by ETL, based on the content of the FIELD_NAME_TEMPLATE field in the STANDARD_SELECTION entry for the parent table. Once all the MV and MVSV parsing tables have been listed in Insight.T24MultiValueSubValueTables, the database administrator can choose to activate or deactivate the
actual parsing process performed by ETL on a specific child table through the Active flag or they can rename
the child MV or MVSV table through the InsightTableName column, as discussed below.
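For instance, deactivating the parsing of one specific child table might be done with an update of the following shape – the Active flag and the InsightTableName column are named in the text above, whereas the column used to identify the child table (here InsightTableName itself), the flag value and the table name are illustrative assumptions:

    -- Hypothetical example: stop ETL from parsing one multi-value child table
    UPDATE InsightImport.Insight.T24MultiValueSubValueTables
    SET Active = 0                                         -- 0 assumed to mean "do not parse"
    WHERE InsightTableName = 'BNK_CUSTOMER_MvRelation';    -- child table name is illustrative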
Online.OnlineOutput
The table InsightImport.Online.OnlineOutput is an internal control table used for micro batches. It is
populated by DW Online insert events and holds a list of all records that need to be pushed through DQ, multi-value and sub-value parsing.
Views
Insight.v_ImportFileList
This view is based on the Hash Total table and potentially on other file list tables.
Insight.v_T24FormLabels
This view uses the VERSION table from Temenos Core banking to extract meaningful headers and column
labels from locally designed screens.
Insight.v_T24LocalRefs
This view presents a full list of local reference columns including information about the source system table
they originate from, their label and target table in Analytics and the characteristics of these local reference
fields e.g. data type, available values etc.
Insight.v_T24LocalRefData
This view combines all local ref metadata in one place.
Insight.v_T24LocalRefSubvalueData
This view combines all local ref sub value metadata in one place.
Insight.v_T24MultivalueData
This view extracts its values from the materialized T24MultiValues and T24MultiValueSubValueTables
tables. It will only show tables that have the value of the Active flag set to 1 in
T24MultiValueSubValueTables (see note 5).
Insight.v_T24MultivalueSubvalueData
This view extracts its values from the materialized T24MultiValueSubValues and
T24MultiValueSubValueTables tables. It will only show tables that have the value of the Active flag set to
1 in T24MultiValueSubValueTables (see note 6).
Insight.v_SubAttributeSourceData
This view combines the above Subtable metadata into one view.
Insight.v_Attributes
This view combines information from the Insight.Entities and Insight.Attributes tables.
Insight.v_T24StandardSelection
This view displays the content of the Standard Selection table, imported from the Core banking system and
containing source data dictionary information.
Insight.v_AttributeSourceData
This view combines all of the above for final Merge into the Insight.Attributes table.
Functions
fn_T24MultivalueData
This function takes values from the materialized T24MultiValues and T24MultiValueSubValueTables tables.
It will only show tables that have the value of the Active flag set to 1 in T24MultiValueSubValueTables (see note 7).
5 Previously, this view was designed to take MV columns from T24MultiValueAssociation and split them using functions, then datatype them based on the value of the T24SubValueAssociation field in SourceSchema, which led MVSV columns to have SS-specific datatypes instead of nvarchar(max).
6 Previously, this view was designed to take MVSV columns from T24SubValueAssociation and split them using functions.
7 In pre-R19 releases, this function was designed to take MV columns from T24MultiValueAssociation and parse them, then datatype them based on the T24SubValueAssociation field in SourceSchema, which led MVSV columns to have SS-specific datatypes instead of nvarchar(max).
fn_T24MultiValueSubValueData
This function takes values from the materialized T24MultiValueSubValues and
T24MultiValueSubValueTables tables. It will only show tables that have the value of the Active flag set to
1 in T24MultiValueSubValueTables (see note 8).
fn_GetPendingSubTables
This function takes values from fn_T24MultivalueData and fn_T24MultiValueSubValueData for MV and
MVSV tables (see note 9).
Insight.s_ImportBaseTables
Description
This is the controlling procedure used in the ETL Agent job to import the base tables from CSV files in parallel, with DQI (Data Quality Import) and/or DPI (Data Profiler Import). With the input parameter @TableName
set to ‘ALL’, it will import every single base table enabled in the SourceSchema table. When assigning a
specific table name to this parameter, instead, the procedure will only import the particular table mentioned.
When importing a new base table for the first time, the procedure normally follows the DPI path to get a
first impression about the column schemas to use and this initial evaluation takes a slightly longer time to
complete. During the DPI, the Insight.ImportSchema table is populated with definitions for the existing
columns and attributes. Afterward the first Analytics ETL, the quicker DQI path is taken. Unless the column
schema information is removed the Insight.ImportSchema table, the target base table’s structure remains
immutable for day-to-day ETL and so long as there is at least one non-system column of this table exists
in Insight.ImportSchema, DQI will be always in effect.
We should note that Insight.ImportSchema includes definitions for system tables and regular non-system
tables. Regular non-system tables are always dropped and re-created from the column definition table
Insight.ImportSchema; re-creating the system tables, however, can optionally be skipped (in debug mode).
For any known table already listed in Insight.ImportSchema, this procedure first tries a straight BULK INSERT
during the DQI process, without Data Profiling. In case of failure due to corrupted or incompatible data,
the Data Quality CLR procedure takes over to enforce the Data Quality rules. For any new table that is not
defined in Insight.ImportSchema, the procedure calls the classic dynamic import with the Data Profiler to process
it, then registers the newly profiled table and columns in Insight.ImportSchema. To continue parsing and importing
multi-valued sub tables after the base table import finishes, s_ImportSubTables should be used instead.
Note 8: Previously, this function was designed to take MVSV columns from T24SubValueAssociation and parse
them.
Note 9: Previously, this function was designed to take MV columns from T24MultiValueAssociation and MVSV
columns from T24SubValueAssociation and then parse them.
Steps
1. Loop through the companies in InsightImport.Insight.SourceBusinessEntities to create system
tables (master company is prioritized)
2. Create system objects and update Insight.CsvColumns
3. For regular tables for which a CSV is available, perform a multithreaded bulk insert with Data Quality
(using an empty shell if a definition in Insight.ImportSchema is available)
Inputs
CsvDir – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2).
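As a minimal sketch, a manual run against the standard export folder might look like the following; the parameter names (@TableName, @CsvDir) are assumptions based on the description and input list above and may differ by release.

-- Hypothetical invocation: import every base table enabled in SourceSchema
-- from the DW.Export folder (parameter names assumed, path illustrative).
USE InsightImport
EXEC Insight.s_ImportBaseTables
     @TableName = 'ALL',
     @CsvDir    = 'E:\DW.Export\'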
Insight.s_ImportDataReportErrors
Description
This procedure is in charge of updating InsightImport logs in case any error occurs while data is being
loaded into InsightImport. This stored procedure calculates a count of records in all tables, compares the
record total to Hash Totals for all Base Tables and it throws an error if there is a mismatch. Also, it adds a
row to the Updatelog table for each bad table.
Steps
Calculates a count of records in all tables
Inputs
TableName – defines the name of the table for which information has to be logged
BatchNum – as described previously
Insight.s_ImportSubTables
Description
This is the controlling procedure used in the ETL Agent job to parse and import the ‘sub’ tables using either
DQI (Data Quality Import) or DPI (Data Profiler Import). ‘Sub’ tables are multi-value, sub-value, local
reference and local reference sub-value tables (abbreviated as MV, MVSV, LR or LRSV tables, respectively).
For a new ‘sub’ table whose structure has never been registered in ImportSchema, the slightly lengthier
DPI approach is taken to parse and import the data. Once the table’s definition is registered in
Insight.ImportSchema, and after the user has reviewed and approved this new column schema information,
subsequent ETL runs take the DQI approach to parse and import the data for this table.
Major improvements were applied to sub table parsing in R18, namely:
Multithreading is performed at a fine granularity, i.e. at the row level. By contrast, parallelism in
R17 was performed at a relatively coarse level, with the logical unit of a combined table and procedure as the
granularity. This improvement makes better use of computing resources;
DQI works hand-in-hand in memory with the sub table parsing threads. Without caching large sets of
data anywhere, each multi-value is streamed from the raw stage to the parsed stage,
then to the data quality-assured stage, and is finally bulk-copied to the target table. Since
multithreaded in-memory parsing is usually faster than disk output, sub table parsing can now run
almost as fast as the disk allows;
The dedicated stored procedure has been simplified. There is only one SQL-CLR procedure
(s_DQParseMulti-value) to call in R18 for all kinds of sub table parsing.
Steps
1. Loop through Insight.Entities to select ‘sub’ tables to be processed
2. Check if the table considered has a record in Insight.ImportSchema.
3. If a record exists, the process skips data profiling for the table, prepares an empty shell for the
sub table based on the Insight.ImportSchema definition and then populates it with the results
of the parsing process.
4. If no Insight.ImportSchema definition exists for the table, data profiling is performed and the MV,
MVSV, LR and LRSV data is parsed and populated into a new ‘sub’ table. A new
Insight.ImportSchema definition is created as a result of the data profiling process.
Inputs
TableName – specifies a multi-value, sub-value, local reference or local reference sub-value table
name to parse and import. If this parameter is set to 'ALL' or NULL, all sub tables of the requested
type are parsed and imported; processing can be restricted to a single table by specifying its name
in this input parameter.
Table Type – defines the type of table to be processed. The value for this parameter can be set
to ‘MV’ for multi value, ‘MVSV’ for multi value sub value, ‘LR’ for Local Reference (‘Local Ref’ is also
a valid entry) or ‘LRSV’ for local reference sub value.
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
TotalThreads – if set to NULL, 0 or a negative value, the maximum number of available CPUs is
used. If it contains a number between 1 and 200, it is used to manually set the number of threads
used to execute the stored procedure instead.
OnlineProcess – the default is batch processing. When set to 1, only tables marked as
OnlineProcess in Entities are processed.
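For example, a hedged sketch of restricting the run to a single table's multi-value parsing (the parameter names are assumptions based on the input list above, using the 'MV' short form documented there):

-- Hypothetical call: parse and import only the ACCOUNT multi-value tables.
USE InsightImport
EXEC Insight.s_ImportSubTables
     @TableName = 'ACCOUNT',
     @TableType = 'MV'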
Insight.s_T24ConsolKeys_add
Description
This procedure parses Consolidation keys for GL-related tables like RE_CONSOL_SPEC_ENTRY and any CRF
file.
Steps
This stored procedure compiles a list of columns based on the header row in each relevant CSV
file. This logic is kept separate from the import tables procedures, despite much duplicate code, to allow for
future changes and flexibility
Columns are stored in a temporary table called ListOfColumns
Columns are then parsed and loaded from the Temp table into the final table, i.e.
Insight.SourceColumns.
Logs the process outcome
Inputs
TableName – defines the name of the table to be parsed (exact table name or ‘All’ are
acceptable entries)
Insight.s_BuildCOA
Description
This stored procedure parses the content of any imported CRF table and uses this data to build a Chart
of Accounts table called COA_Structure that resides in InsightImport. Insight.s_BuildCOA has no controllable
input parameters.
Steps
This stored procedure selects the names of CRF reports from ExtractLists and counts them. If
General Ledger and Profit & Loss CRF files are separated, this process is iterated twice
CRF reports lines are extracted and parent lines are identified for each child line
BS_GROUP (Balance Sheet group) is identified for each line
The COA_Structure table is formed and populated
Insight.s_SystemObjects_Update
Description
This Stored Procedure updates the Hash_total table and the Standard Selection. As previously mentioned,
the Hash_total Table lists all the CSV files to be imported, while Standard Selection lists the source system
metadata for each column.
Insight.s_SourceColumns_Update
Description
This stored procedure populates the table Insight.SourceColumns with all the column names in each CSV to
be imported.
Steps
Bulk inserts the top row of each CSV into a Temp table.
A string splitter function splits the row into many rows, one per field.
The end result is written into the Insight.SourceColumns table, with tablename and
columnname.
Inputs
PathName – as described previously.
TableName – the table for which a column list must be created
BatchNum – as described previously
Insight.s_Object_Create
Description
This procedure creates a table or a view given a list of columns.
Steps
Using dynamic SQL, construct and optionally execute the CREATE statement.
Inputs
ListOfColumn – table valued parameter of the list of columns and their data types, (null value
is acceptable)
ExecuteCreate – should the statement be executed or just returned. Acceptable values are 1 =
Execute or 2 = Create
ObjectType – type of object to be created. Acceptable values are 1 = Table or 2 = View
Insight.s_ObjectsFromList_Create
Description
This procedure creates a given list of tables or views.
Steps
Loops through the tables, gets a list of columns from Insight.Attributes and calls Insight.s_Object_Create to
create each table.
Inputs
ListofTables – table valued parameter of the list of tables to be created.
CreateVarcharMaxTables – creates untyped placeholder tables for initial Bulk Insert so that
data profiling can be done.
Insight.s_ImportTable_Update
Description
This procedure bulk inserts CSV files into their respective tables.
Steps
For each table, look at Hash_Totals, then loop through and import all the CSVs.
Inputs
PathName – as described previously.
TableName – defines the table for which a column list must be created.
BatchNum – as described previously
TotalThreads – allows the number of threads used to execute the stored procedure to be assigned
manually (accepts NULL)
Insight.s_DataProfile_Create
Description
This procedure creates a data profile of all the tables to be imported. It determines what the correct
datatype of a particular column should be, and is used to update what the source system says the data
type should be.
Data profiling is relatively slow, so to speed things up the records in a table are ranked in order of
length and only the set of records that contains the maximum length for each column is returned.
To make things more efficient, the above is done on the top 50 percent of records.
Because only the top 50 percent of records is profiled, there is a risk that the profile will not reflect reality;
if the eventual data insert fails for this reason, the data profile is automatically run again with all records.
However, if the insert does not fail, there is a risk that the data types could be sub-optimal. The only way
around this is to data profile all records. This can be done with a very simple variable assignment in the
code of the stored procedure.
Steps
Dynamically creates a table with the following fields for each Imported table.
TableName, ColumnName, ColumnLength, ColumnMaxValue, ColumnMinValue, ColumnMaxLenValue,
ColumnIsNumeric
Based on the above table, another table is created with the following fields (these steps are
separated for performance reasons).
TableName, ColumnName, ColumnMinValue, ColumnMaxValue,
ColumnMaxLenValue, ColumnLength, ColumnIsNum, ColumnIsBigInt,
ColumnIsDecimal, ColumnIsDate, ColumnIsNull, Added
Insert the results into the Insight.DataProfiles table.
Inputs
TableName – the name of the table to be profiled, e.g. AA_ARR_Account.
NumberOfSampleRowsInside – number of sample rows to be profiled
BatchNum – as described previously
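A hedged example call, using the parameter names from the input list above (BatchNum omitted here; the sample row count is purely illustrative):

-- Hypothetical call: profile a single base table on a limited row sample.
EXEC Insight.s_DataProfile_Create
     @TableName = 'AA_ARR_Account',
     @NumberOfSampleRowsInside = 100000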
Insight.s_Attributes_Update
Description
This procedure updates the InsightImport data dictionary with datatypes and column information
from various sources: the list of columns to be created is derived from the header row of the source CSV
files from Core Banking, data dictionary information is taken from Standard Selection, and the Analytics
User Data Types are retrieved and then used to override the existing data types (the actual data types are
calculated during Data Profiling by s_DataProfile_Create).
Steps
Merge Data from InsightImport.Insight.SourceColumns
Merge Data from T24StandardSelection
Merge Data from SourceSchema, Primary Index Columns and Rebuild from selection
Inputs
TableName – the name of the table to be profiled, e.g. AA_ARR_Account.
Insight.s_T24AllMultiVvalue_add
Description
This procedure parses all types of multi-value tables: Local Ref’s, Local Ref Sub-Values, Multi-values and
Multi-value Sub-Values.
Steps
Create temporary table based on the Entities and Attributes tables.
Call one of two CLRs to parse the multi-values: s_ParseStringToColumns for Local Refs, and
s_ParseMultiStringToColumns for all other sub-tables.
Load from Temp table into Final Table.
The above is called from Insight.s_Import_Control so the first load is into the final table with
nvarchar(Max) columns and subsequent loads are into the properly typed tables.
Inputs
TableName – defines the name of the table to be profiled, e.g. AA_ARR_Account_Localref.
IsGenericTyping – this parameter can be set to 1 for the Data Profiling, so NVarChar(MAX) is
used for each column rather than the actual data types
BatchNum – as described previously
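As an illustrative sketch (parameter names taken from the input list above, values hypothetical), a first-pass parse with generic typing so that data profiling can follow might be invoked as:

-- Hypothetical call: parse a Local Ref table into nvarchar(max) columns first.
EXEC Insight.s_T24AllMultiVvalue_add
     @TableName = 'AA_ARR_Account_Localref',
     @IsGenericTyping = 1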
Insight.s_Import_Control
Description
This procedure is internally invoked by s_ImportBaseTables and s_ImportSubTables to manage the
Bulk Insert process from CSV files into SQL Server.
Steps
For each table being run:
Execute Insight.s_ImportSystem_Update
Truncate or drop tables depending on whether @RecreateTables = 1 is specified.
Create a list of tables to be imported and/or created using the InsightTables table created
previously.
Execute Insight.s_SourceColumns_Update
Execute Insight.s_Attributes_Update
Set @Changes flag based on contents in ##MetaDataChanges, populated in s_Attributes_Update.
Set the CSV-exists flag, indicating whether a CSV exists for the table.
Set @DataProfileDone Flag.
If (@RecreateTables = 1 Or @Changes = 1 Or @ErrorFlag = 1):
Create a new table with nvarchar(max) datatypes.
Create a view on the above table. The Bulk Insert is done into the view because the table has more
columns than exist in the CSV file; the view has the same column list and order as the CSV
file.
Execute Insight.s_ImportTable_Update
Update the Insight.SourceColumns table with column IDs from sys.columns.
If a data profile has not been done, create a data profile for the table.
Execute Insight.s_Attributes_Update
Once the data profile is done, typed tables can be created.
Execute Insight.s_ObjectsFromList_Create to create Typed Tables based on the updated
Insight.Attributes.
Execute Insight.s_ImportTable_Update to insert the CSV into the properly typed table.
If there is an error doing the above, go back, recreate the nvarchar(max) tables, and start the
process again.
If nothing has changed from a previous day's load, Insight.s_ImportTable_Update will be run
after it is determined there are no changes.
Inputs
PathName – as described previously.
TableName – name of the table to be processed. Acceptable entries are either the exact name of
the table to be processed, e.g. ACCOUNT, or ‘All’
ReCreateTables – defines whether data tables should be recreated. Acceptable entries are: 1 = recreate
tables, 0 = load into the existing table if a data profile has been done
TableType – defines the type of tables to be processed. Acceptable entries are: Regular, Local
ref, LRSV (local ref sub value), MV (multi-value), MVSV (multi-value sub-value). It should be noted
that, within Analytics ETL, s_Import_Control will be re-executed five times so that each available
Table Type is processed.
SystemTablesExists – defines if system tables should be re/created which involves a new data
profiling process. Acceptable entries are: 1 Create System tables, 0 do not create system tables, 2
create system tables when sub-tables are being created.
BatchNum – as described previously
TotalThreads – as described previously
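A hedged example of one of the five per-type invocations mentioned above (parameter names assumed from the input list; values purely illustrative):

-- Hypothetical call: re-import the ACCOUNT local reference sub table,
-- recreating the target tables from the profiled definitions.
EXEC Insight.s_Import_Control
     @PathName       = 'E:\DW.Export\',
     @TableName      = 'ACCOUNT',
     @ReCreateTables = 1,
     @TableType      = 'Local ref'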
Insight.s_CreateImportViewForCsv
Description
This procedure creates a view of the base table for an individual CSV, with columns lined up for a straight
Bulk Insert without a format file.
Inputs
CsvPath – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2).
Insight.s_CreateEntities
Description
This procedure creates a list of tables to be created as well as the first CSV file to be called. It
replaces s_Entities_Create (abbreviated as SEC) when importing CSVs with Data Quality, as it performs
better. However, SEC has not been deprecated, since it is still in use for dynamically importing CSVs
with the Data Profiler (without Data Quality).
To better understand the relationship between Data Quality Import (performed by s_CreateEntities) and
Data Profiler Import (performed by SEC), we should note that the three configuration tables involved in
these processes are Insight.Entities, Insight.Attributes and Insight.ImportSchema. Data Quality Import
(DQI) does not use Attributes (but does use Entities), since DQI assumes all table structures and
attributes are fixed as defined in Insight.ImportSchema. For non-system tables, any incoming data
that cannot be converted to the predefined data type is subject to Data Quality correction rules and no
runtime error is thrown.
Data Profiler Import (DPI) uses both Entities and Attributes (but not Insight.ImportSchema) to perform
dynamic import. Related procedures are: s_Entities_Create and s_Attributes_Update. If any corrupted or
incompatible source CSV data is encountered, bulk-copying to target table will result in a terminating
runtime error.
Under DQI, when a new table (not listed in ImportSchema) is brought in, the procedure reaches out to the DPI
procedures to perform a dynamic import. The structure of the newly imported table, determined by the Data
Profiler, is then collected into ImportSchema. Unless it goes through another manual review and revision
process, the content of ImportSchema effectively becomes the blueprint for the import tables.
Steps
Updates Insight.Entities with system tables if necessary
Update Logs
Inputs
Operation – defines the type of tables to be processed and can have the following values:
o [0:All ] - All entities/tables including the following 1, 2 and 3;
o [1:System] – Temenos Core Banking configuration/ dictionary tables including:
dbo.STANDARD_SELECTION, dbo.HASH_TOTAL, dbo.LOCAL_TABLE and
dbo.LOCAL_REF_TABLE;
o [2:Base ] - InsightImport base tables excluding the four system tables indicated in option
1;
o [3:Sub ] - InsightImport ‘sub’ tables including: LocalRefs (LR), LocalRefSub-values
(LRSV), Multi-values (MV) and Multi-valueSub-values (MVSV)
BatchNum – as described previously
Insight.s_Entities_Create
Description
This stored procedure creates a list of tables to be created as well as the first CSV file to be called. Also, it
logs the process times.
Steps
Merge data from Hash_Totals
Merge data from SourceSchema
Inputs
This stored procedure uses the InsightImport..Hash_Totals table to define the list of entities to be created
and the only input parameter is:
Insight.s_CreateImportTableStructure
Description
This stored procedure is used for DQI (Data Quality Import) to cast an empty table structure based on the
corresponding Insight.ImportSchema’s definition then replace the existing data table, if found in
InsightImport.
Inputs
TableName – The target table that must be processed.
BatchNum – Batch number used to execute the stored procedure.
Insight.s_CreateImportViews
Description
This stored procedure is used to create individual views for each and every CSV split. DQ Import can handle
both physical tables and views as the targets. During Analytics ETL and Process Data ExStore, the import
process normally uses views as DQI targets because of their simpler column mapping.
Inputs
TableName – The name of table that must be processed.
Insight.s_CreateSystemObjects
Description
This stored procedure updates the Hash_total table (the table that lists all the CSV files to be imported) and
Standard Selection (the table that lists the source system metadata for each column). It takes a list of
companies or entities as input and loops through the companies, importing all the aforementioned files
for each company. From R19, this stored procedure has been updated to materialize the v_T24MultiValues
and v_T24MultiValueSubValues views, then merge their contents and insert them into the
T24MultiValueSubValueTables table.
Inputs
CsvDir – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2). The syntax for the path is
‘DriveLetter:\Folder\Subfolder\’. Example: ‘E:\DW.Export\’
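A minimal hedged example, assuming the @CsvDir parameter name shown above:

-- Hypothetical call: refresh Hash_Total, Standard Selection and the
-- T24MultiValueSubValueTables parameter table from the export folder.
EXEC Insight.s_CreateSystemObjects @CsvDir = 'E:\DW.Export\'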
Insight.s_DQ_ImportTable
Description
This stored procedure tries to bulk-load multiple CSV files in parallel into the target (base/regular) table.
The target table's structure is cast from the template table (i.e. Insight.ImportSchema). In case of failure
due to incompatible data types, the affected areas are cancelled and the Data Quality CLR procedure takes over
and handles them according to the pre-defined data quality rules. If no specific rule is defined for the subject
column, generic or default rules are applied. In a concurrent streaming process, the DQ procedure resolves the
source-to-target column mapping, bulk-transfers the data and corrects it with the applicable rules on the fly.
The resulting fields and revision details are output on the same row.
Inputs
CsvDir – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2). The syntax for the path is
‘DriveLetter:\Folder\Subfolder\’. Example: ‘E:\DW.Export\’
Insight.s_ReportFailingColumns
Description
This stored procedure takes a source and a destination table, assumes that all source columns
will be inserted into similarly named destination columns, and returns the failing columns.
Inputs
TableName – The target table that must be processed.
dbo.s_MergeKeyColumnDQRules
Description
This stored procedure is used, within the Data Quality Import (DQI) process, to merge and apply data
quality rules, especially for primary key columns. Null values in such columns need to be addressed before
converting them to primary keys. Typically, NVarChar-typed columns are revised as ‘{NULL}’, Int-typed
columns are revised as -1 and Date-typed columns are revised as ‘1900-01-01’.
Inputs
TableName – The target table that must be processed.
TenantId – Id of the database tenant
BatchNum – Batch number used to execute the stored procedure.
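The default substitutions described above can be pictured with the following illustrative fragment; the table and column names are hypothetical, and the real rules are applied by the procedure from the DQ rule tables, not by hand:

-- Illustration only: the kind of replacement applied to key columns.
-- NVarChar keys: NULL -> '{NULL}'; Int keys: NULL -> -1; Date keys: NULL -> '1900-01-01'
UPDATE t
SET    CUSTOMER_CODE = COALESCE(CUSTOMER_CODE, '{NULL}'),   -- nvarchar key (hypothetical column)
       CUSTOMER_NO   = COALESCE(CUSTOMER_NO, -1),           -- int key (hypothetical column)
       OPENING_DATE  = COALESCE(OPENING_DATE, '1900-01-01') -- date key (hypothetical column)
FROM   dbo.SomeImportedTable AS t;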
Insight.s_CollectSchema
Description
This stored procedure collects a table’s column schema information from Insight.ImportSchema. It should
only be used within the InsightImport database’s scope.
Inputs
TableName – The target table that must be processed.
SchemaName – name of the schema of the table for which the stored procedure will be executed
Insight.s_DQ_ImportTable
Description
This stored procedure handles the multithreaded DQI process. It invokes the data quality-related SQL-
CLR or T-SQL procedures and functions to import a specified base table from the CSV file splits that are
placed under a directory.
Inputs
CsvDir – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2). The syntax for the path is
‘DriveLetter:\Folder\Subfolder\’. Example: ‘E:\DW.Export\’
Insight.s_TableHasDQIssue
Description
This stored procedure is called after the initial ETL import to find out whether there is any DQ issue in the
selected table. This stored procedure can return the following values:
Online.s_ProcessImportOnline_Update
Description
This procedure is used to pull near real-time Online data into the InsightImport dbo schema to apply DQ, multi-
value and sub-value parsing rules. Once the data has been staged, it is loaded into the Online schema in
InsightLanding.
Micro batches are run by calling this procedure at a set, configurable interval. When all the DW Online
intra-day data has been pushed through, the Online EOD process moves the data from the Landing Online
schema into the Landing BS schema and a new Online processing day commences.
Steps
1. Start a new Online micro batch if there is no active one
2. Obtain list of tables to be processed based on new DW Online records
3. Process Online changes to T24/Temenos Core Banking metadata tables
4. Copy data from InsightImport Online Schema to InsightImport dbo schema
5. Do Data Quality checks as data is copied
6. Do Local Ref Parsing
7. Do Multi-value Parsing
8. Do Local Ref Sub-value Parsing
9. Do Multi-value Sub-value Parsing
10. Do Attribute Calculations-Import
11. Load data into InsightLanding Online Schema
12. Validate all intra-day records for all companies have been processed by DW Online. Then Start
Online EOD process
13. Update DW Online Processing agent job freq_interval if changed in SystemParametersLanding
Inputs
Source Name – Used to pass the name of the source system to be processed. In this case it is BS
TotalThreads – allows the number of threads used to execute the stored procedure to be assigned
manually (accepts NULL)
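A hedged sketch of a micro-batch invocation; the exact parameter name for the source is an assumption, written here as @SourceName:

-- Hypothetical call: process one Online micro batch for the core banking (BS) source,
-- letting the procedure choose the thread count.
EXEC Online.s_ProcessImportOnline_Update
     @SourceName   = 'BS',
     @TotalThreads = NULL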
Online.s_InsightImport_Online_Tables_Create
Description
A system stored procedure that is used to create the InsightImport Online schema tables based on
STANDARD_SELECTION metadata
Inputs
Table Name – Name of table to be created. Acceptable values are the table name or the keyword
‘ALL’
RecreateTable – Flag used to force the dropping and recreation of a table. Acceptable values
are ‘1’, ‘0’ or NULL
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
Online.s_ImportSystemTables_Update
Description
Online changes to T24/Temenos Core Banking metadata tables ('LOCAL_REF_TABLE', 'LOCAL_TABLE',
'STANDARD_SELECTION') are handled by this stored procedure.
s_InsightImportSystemTables_Update (Deprecated)
Description
This procedure handles CSV files from multiple companies exported separately with different hash total
files. All Hash total files from the separate companies will be loaded into InsightImport at the end of this
procedure and they will be combined into the same Hash Total table. This table will be used to drive
s_InsightImport_Update.
Steps
1. Drops all tables in database for first company import when Drop Tables parameter set to 1
2. Bulk Insert into Hash_Total table from company specific Hash_Total CSV file
Inputs
Pathname – File system path where the DW.Export CSVs are located. All CSVs should be in the same
folder, even in a multi-company setup with CSVs from different companies, as the files are
distinguished by different prefixes (for example BNK, CO1, CO2).
The syntax for the path is ‘DriveLetter:\Folder\Subfolder\’. Example: ‘E:\DW.Export\’
Mnemonic – This is the company mnemonic prefix of the file. As there should only be one set of
CSV files per company in the path, and therefore only one hash total file, this indicates which
company is being imported. The syntax for this mnemonic should match the Core banking
company mnemonics exactly (i.e. BNK or GB1).
DropTables – If Drop Tables is set to yes, then all tables will be dropped/truncated as
necessary. Only the first company in the set (BNK) should have this set to 1. This will allow one
or more companies to be imported after the first company without the existing records in the
Hash Total table being dropped. When the first company is added for a new import all records
from the Hash Total table should be dropped.
s_InsightImport_Update (Deprecated)
Description
Once all hash total files have been loaded by s_InsightImportSystemTables_Update,
s_InsightImport_Update can be invoked with the same Pathname used for
s_InsightImportSystemTables_Update. This will run through each file listed in the Hash Total table and
import it into the InsightImport database, all as generic nvarchar(max) datatypes.
Steps
s_T24TableFromSelection_Create (Deprecated)
Description
Sometimes certain tables are blank in Core banking on a particular date. For example, a transaction table
with a low volume of transactions, if exported per day, may not have records on a particular date. When
DW.Export runs its extract it does not create a file for these tables. This causes a problem downstream in
the ETL process: if a mapping view is dependent on that table, for example, it would fail if the table were
missing. So that the ETL can run without failure regardless of the availability of these tables, a blank table
needs to be created.
This procedure creates blank tables based on T24 Standard Selection metadata describing the table
columns and data types, as well as your Source Schema configuration. Only tables with
T24RebuildFromSelectionIfMissing set to ‘Yes’, and also not having any table exported from DW.Export will
be created.
This procedure also properly data types each column in each table by creating a table based on its Standard
Selection metadata and copying data into it from the raw Core Banking-imported table from
s_InsightImport_Update.
Steps
1. Get the list of tables that are missing from InsightImport and have T24 Rebuild From Standard Selection set
to yes in the Source Schema table, and create them
2. Determine the proper data types for all columns of all tables in Import based on Standard Selection
3. Check whether any data types defined in Standard Selection do not match the data; otherwise alter the
tables to have the proper data types
4. Create primary indexes on tables
Inputs
No Parameters are required for this procedure.
s_T24SourceSchema_Update (Deprecated)
Replaced by s_Import_Control which calls s_T24AllMultiVvalue_add.
Description
Tables coming from Core Banking can have Local Reference, multi-value and sub-value fields on each
parent table. These are stored in columns and need to be parsed out into referential tables with a linking
unique key.
This procedure calls a custom CLR function to parse these records into their own tables and creates the
linking based on the PrimaryIndexColumn specification from the SourceSchema table.
Steps
Call a number of stored procedures:
1. s_T24Localref_add, create local ref tables based on Standard Selection definition
2. S_T24Localrefsub-values_add, parse multiple sub-values for local ref fields
3. s_T24multi-valueSub-value_add, parse multi-value fields based on source schema definition, parse
sub-values for multi-values
4. S_T24Consolkeys_add, parse consol keys.
Inputs
No Parameters are required for this procedure.
Additional Information
Multi-valued Files
Core banking uses a three-dimensional file structure called a “non-first normal form” data model to store
multiple values for a field in a single record known as multi-valued fields. A multi-valued field holds data
that would otherwise be scattered among several interrelated files.
Two or more multi-valued fields can be associated with each other when defined in the file dictionary. Such
associations are useful in situations where a group of multi-valued fields forms an array or a nested
table within a file. You can define multi-valued fields as belonging to associations in which the first value
in one multi-valued field relates to the first value in each of the other multi-valued fields in the association,
the second value relates to all the other second values, and so on. Each multi-value field can be further
divided into sub-values, again obeying any relationships between fields.
Standard_Selection Table
STANDARD.SELECTION is the dictionary application for Temenos Core banking and each application must
have an entry in STANDARD.SELECTION. There is information about every field available in each of the
Core banking files. This table is extracted from Core banking and a copy resides in the InsightSource
database. Below is a classification of the type of columns in STANDARD.SELECTION.
System Multi-Values
Model bank columns that are essentially multiple columns in one column.
The columns are parsed into a separate table with multiple columns based on the definition in
SourceSchema.
System Sub-Values
Model bank columns that are essentially multiple rows in one column
Columns are parsed into a separate table with multiple columns based on the definition stored in
SourceSchema.
User Defined (Local Ref)
Local development columns that are contained in Local.Ref column
Data type conversion is done based on standard selection
The columns are parsed into a separate table with multiple columns based on standard
selection.
Which columns to exclude from parsing is defined in SourceSchema.
User Defined (Local Ref) Multi-Values
Local development columns that are essentially multiple rows in one column
Data type conversion is done based on standard selection
The columns are parsed into a separate table with multiple rows based on standard selection,
local_table and local_ref_table.
Which columns to exclude from parsing is defined in SourceSchema.
If we parse this local reference, a new table will be created called BNK_ACCOUNT_LocalRef in which each
value of the local reference will be allocated an individual column.
Below, we can see what happens when the Local ref has been parsed correctly by Analytics ETL. All columns
have proper core banking names and SQL server data types, based on metadata.
Within the same example of parsed multi-value field, we can see that some of the new columns resulting
from the multi-value field in Core banking have, in turn, sub-values. In the example below, the
LENDING_OFFICER and LENDING_ROLE columns (resulting from the parsing of the LocalRef field in the
new ACCOUNT_LocalRef table) contain three sub-values each and are associated with one another.
Figure 11 - Example of parsed multi-value Local reference field where sub-values are unparsed
If we perform sub-value parsing on ACCOUNT_LocalRef, the result table of sub-value parsing will be stored
into a new table called, in this case, BNK_ACCOUNT_LocalRef_LENDING_OFFICER, as shown in the next
figure. LENDING_OFFICER is the name of the first of a series of associated multi-value fields, each of which
can contain multiple sub-values.
System Multi-Values
In addition to the multi-values in Local reference fields, of which we have seen some examples above, Core
banking also has system multi-values, i.e. standard fields of standard Modelbank application which contain
multiple values.
The Standard Selection table from Temenos Core banking will define whether a specific field on a table is
a system multi-value or a single value field. If we run a query on the v_T24StandardSelection view and
check out the output, we can see some examples of system multi values – if we look at the
CONTACT.CLIENT, CONTACT.TYPE and CONTACT.STATUS field definitions below, we can see that the
value for the content type for all the columns is set to the system while the SingleOrMulti-value column is
set to ‘M’ for multi-value.
Once the Insight.s_Import_Control stored procedure is run with TableType set to MV within the Analytics
ETL process, a new table called BNK_CCS_CR_CONTACT_LOG_Contact_Client will be created. This table
will contain multi-value parsing for CONTACT.CLIENT and its associated CONTACT.TYPE and
CONTACT.STATUS multi-value fields.
System Sub-Values
Once the BNK_CCS_CR_CONTACT_LOG_CONTACT_CLIENT table has been created, there may be sub-values
separated by a special character in the CONTACT_CLIENT column. In order to parse these sub-valued
fields into a new table, the T24SubValueAssociation column in the SourceSchema table for the
BNK_CCS_CR_CONTACT_LOG record has to be set to
‘BNK_CCS_CR_CONTACT_LOG_CONTACT_CLIENT|CONTACT_CLIENT’. Once the
Insight.s_Import_Control stored procedure is run with TableType set to MVSV within the Analytics ETL
process, a new table will be created as BNK_CCS_CR_CONTACT_LOG_Contact_Client_Contact_Client.
To be able to parse any of the LocalRef fields shown above into a new table, the T24ExtractLocalRef column
for the BNK_CUSTOMER record in SourceSchema has to be set to ‘Yes’. Once the Insight.s_Import_Control
stored procedure is run with TableType set to Local ref within the Analytics ETL process, a new table will
be created as BNK_CUSTOMER_LocalRef.
The MV/MVSV split has a new parameter table, T24MultiValueSubValueTables, with a pre-populated
list of all MV/MVSV-associated columns for all the tables defined in SourceSchema with
T24MultiValueAssociation and T24SubValueAssociation set to ‘Yes’.
A parameter field has been added to SourceSchema to enable or disable the MV/MVSV parsing, and
ETL splits those columns based on the new parameter table.
The new functionality was introduced to eliminate user errors occurring when an MV/MVSV-associated
column that is configured to be parsed is missing. The analytics system lists all the MV/MVSV-
associated columns in Source Schema and the user has to manually enable column parsing by setting
the related flag in this configuration table. Based on the content of the latter, the stored procedure
s_CreateSystemObjects will populate the T24MultiValueSubValueTables table.
New functionality
Two new views and one new table will be created when Analytics ETL runs for the first time.
1. v_T24Multivalues
2. v_T24MultivalueSubvalues
3. T24MultivalueSubvalueTables
The value of the FIELD_NAME_TEMPLATE field, from the Temenos Core Banking STANDARD_SELECTION
table, is used to get the value marker of each field and to determine how it should be parsed. The meaning of
each value marker is listed below (see note 10).
Old process
Until R18, the database administrator had to manually configure the MV/MVSV columns to be parsed in Source
Schema and Analytics ETL would split only those columns. If a specific MV/MVSV parsing was defined in
SourceSchema and a corresponding CSV file did not exist, ETL would create the table and datatype it based
on the STANDARD.SELECTION table.
On a subsequent run, if the CSV file existed for the MV/MVSV, the ETL process would try to insert the
MV/MVSV value into the previously datatyped field and fail if the data typing/profiling had not been accurate.
That would trigger the Data Quality (DQ) process to start and apply the required replacements.
If a table had been configured with T24MultiValueAssociation and not T24SubValueAssociation but it
contained MVSV columns in T24MultiValueAssociation, DP would do the data typing/profiling based on the
T24SubValueAssociation field. This would fail and trigger DQ for replacements.
Configuration
Configuring Source Schema (Add/Remove/Configure Tables)
Using the Source Schema Definition in the dedicated section, which describes the different columns and
available syntax, each table to be imported must be listed and configured as per the definition of the table.
By default with Analytics, a pre-configured version of Source Schema is provided that works with an
unmodified Model Bank configuration for Temenos Core banking. At this point, you will need to configure
SourceSchema to reflect how your bank’s implementation of Core banking differs (at a table level)
from what Model Bank provides. Typically this means turning tables off by removing rows entirely or
adding new records for additional tables to be brought in.
Multi-Value, Sub-Value, and Local Reference tables should be configured within the parent table record in
Source Schema. They do not need separate rows, because there are configuration columns for this on
the parent table row.
Note 10: There are language-specific fields that have an additional LL_ prefix, which needs to be replaced to
match the actual SS fields.
Ensure the desired table is being extracted from DW.Export (see DW.Export Configuration guide
for assistance)
Ensure desired CSV file is in the directory which ETL has been configured to read from. There
may be one or more CSV files for each file being extracted from Temenos Core banking.
InsightImport will handle this and there are no configuration changes needed if multiple CSV files
are present for a particular core banking file.
Add a record (if one does not already exist) in the Source Schema table, configuring
SourceSchema as per this table’s definition and syntax in the SourceSchema section.
Run the InsightImport procedures and review the output to verify that you have configured them properly.
Ensure the appropriate local ref and multi-value tables have been created and have been data
typed accordingly.
InsightImport can be configured to run as part of the Insight SQL agent job or run manually via the stored
procedures. Most of the Import procedures are orchestrated by Insight.s_ImportBaseTables, to load the
content of CSV files into the Import tables, and Insight.s_ImportSubTables. The latter must be executed
several times with different input parameters: to parse local reference fields first, then multi-values,
multi-value sub-values and local reference sub-values.
Stored procedure execution should comply with the following order and input parameters, e.g.:
USE InsightImport
Exec [Insight].[s_BuildCOA]
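A fuller hedged sketch of the typical order described above; the exact parameter names and values are assumptions and may vary by release and configuration:

USE InsightImport
-- 1. Base tables from CSV (Data Quality / Data Profiler import)
Exec [Insight].[s_ImportBaseTables] @TableName = 'ALL', @CsvDir = 'E:\DW.Export\'
-- 2. Sub tables, one call per table type, in the order described above
Exec [Insight].[s_ImportSubTables] @TableName = 'ALL', @TableType = 'Local Ref'
Exec [Insight].[s_ImportSubTables] @TableName = 'ALL', @TableType = 'MV'
Exec [Insight].[s_ImportSubTables] @TableName = 'ALL', @TableType = 'MVSV'
Exec [Insight].[s_ImportSubTables] @TableName = 'ALL', @TableType = 'LRSV'
-- 3. Chart of Accounts build, as shown above
Exec [Insight].[s_BuildCOA]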
Logging
Logging for all InsightImport activities is currently being recorded in dedicated tables within the InsightETL
database.
InsightLanding
Overview
InsightLanding is a multi-source archive used to land all source data together. It holds multiple days of
source data and it is the only relational database other than InsightWarehouse that does so in the Analytics
solution. Raw source system data can be consumed at this point by reports, various other types of Analytics
web contents or by other systems.
Any data that may need to be reprocessed into InsightWarehouse has to be retained in InsightLanding.
The primary purpose of Landing is to facilitate easy reprocessing of all historical data for Insight Warehouse
without the need to maintain or re-import separate copies of source system data (e.g. CSV files extracted
from core banking).
If batch processing is selected for a table, the table will be created as <source-system>.<table-name>,
e.g. BS.CUSTOMER or Budget.GLBudget, and it will be updated with the latest business data from the
relevant source system when the daily ETL is run. E.g. BS.CUSTOMER will load a copy of the latest content
of the corresponding CUSTOMER table in InsightImport every day (which in turn is populated with the latest
CUSTOMER data from Temenos Core Banking). BS.CUSTOMER, like any other InsightLanding table, will
represent an archive of all the historical data that has been loaded into Analytics via ETL. Users will be able to
query data for the appropriate business date from the BS.CUSTOMER table in InsightLanding by filtering on
the MIS_DATE column.
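For example, a report query against the landed core banking customer table might look like the following sketch; the business date value is purely illustrative:

-- Retrieve the customer snapshot landed for a given business date.
SELECT *
FROM   InsightLanding.BS.CUSTOMER
WHERE  MIS_DATE = '2024-03-28';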
If an InsightLanding table is flagged for online processing, the first load will also populate the standard
<source-system>.<table-name> schema table e.g. BS.CUSTOMER. However, after this first load, any intra-
day updates will be loaded into “Online” schema tables e.g. Online.CUSTOMER. These online tables will be
updated with any new records coming from near real-time tables in InsightImport after a specified time
interval. Furthermore, during the Online End of day (EOD) process the content of the online tables will be
moved to the associated <source-system>.<table-name> schema table. E.g. ETL will copy the intra-day
records from Online.CUSTOMER to BS.CUSTOMER, then erase the content of the former. When the next
batch Analytics ETL or Process Data ExStore runs intra-day, data will already be loaded.
InsightLanding includes a number of abstraction views for both batch and online tables (e.g. BS.v_Customer
and Online.v_Customer, respectively). While the former are based only on batch InsightLanding tables such
as BS.CUSTOMER, the latter will query a full set of all Extract List tables (including batch-only tables) and will be
a union of online deltas and the previous business date's batch data. However, for STMT_ENTRY, CATEG_ENTRY
and RE_CONSOL_SPEC_ENTRY (account/GL transaction tables), only intra-day data will be shown in the Online
views.
If a traditional batch update is selected for a table, data is landed in columnstore storage format.
Tables storing intra-day updates, instead, use a row-based storage format.
Columnstore Index
Columnstore index is a Microsoft SQL Server technology for storing, retrieving and managing data by using
a columnar data format. This type of index stores data column-wise instead of row-wise.
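As a generic illustration of the underlying SQL Server feature (not the platform's own deployment scripts; the index name here is hypothetical), a landed table can be converted to columnar storage like this:

-- Hypothetical example: store the whole landed table column-wise.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_BS_CUSTOMER
    ON BS.CUSTOMER;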
From R18, Temenos Analytics has adopted and implemented columnstore index technology in
InsightLanding. The main benefits of using columnstore indexes in InsightLanding are high compression rates
and large query performance gains over traditional row-oriented storage. Disk space requirements have
dropped by up to 90%.
All dated schemas are now replaced by a single source-named schema, for example BS for Banking System.
For instance, a table from the core banking system would have a schema of BS.TableName, and all historical
data for extracted dates will be stored in this table. All source tables that are landed are converted to a
clustered columnstore index.
All source data is stored raw, exactly as it was extracted, without any transformations, with the exception of
core banking data, which has additional tables created in InsightImport to deal with the multi-valued data
and added data types (see the Configuration section of the InsightImport chapter for details about this
process). Additional calculated columns on source data can be added to individual tables as per rules
defined in the AttributesCalculation table and in the Rules Engine-related tables of the InsightETL database.
Temenos Core Banking multi-value fields can be larger than the biggest supported string type
(NVARCHAR(4000)) and would need to be stored as a MAX data type, which cannot be kept in a columnstore.
When the data load process encounters column values greater than the largest supported type, a process is
triggered to move the column to the row store (MaxString tables). Only unique column values are stored in
the rowstore lookup with an associated hash value, and the original table column values are replaced by
the hash lookup value. The rowstore lookup table uses the most efficient storage possible by retaining only
the unique values for each column that exceeds the supported data types.
Reporting views join both the columnstore index base tables and the lookup table, making the split seamless
to the end user.
ETL batch control, the multithreading process and all logging have not changed in this release.
It is important to mention that processes that load data into InsightLanding using the dated schemas, as used
in previous releases, have not been sunsetted and are fully supported.
Online Reports Source – InsightLanding provides a number of new abstraction views under the Online
schema that can be used to feed reports with near real-time snapshots of a table's content. If only
intra-day data is needed in a report, the data should be retrieved from the Online tables directly instead.
Technical Details
Architecture
In the figure below, we can see how the Insight Landing database fits in the Advanced Analytics platform’s
architecture.
Technical Components
Tables
dbo.ExtractList
The ExtractList is a configuration table in InsightLanding that contains a list of all source tables to be
imported into Landing from all source systems. The table has one record per source system table and is
used to configure which tables are imported into Landing, whether or not to rename them, how (if at all) the
source table should be filtered, and how long to retain historical data for a particular table.
If Temenos Core banking is used as a source system, the content of the SourceSchema table in Insight
Import must be consistent with the content of ExtractList.
dbo.ExtractSourceDate
The ExtractSourceDate table is used to store the queries that are used to retrieve the current extract date
from the source system data. One record for each source system defined in the Extract List table is required.
The date returned from the query will be used to create the date portion of the schema used for each table
stored in InsightLanding. If a source system makes use of both batch and online extraction for its tables,
this source system will have two entries in ExtractSourceDate, one with the OnlineProcess flag set to 1 and
the other with this flag set to 0 or NULL. The business date for online extraction will be the business date
used for batch extraction plus one business day.
Please note that Dates tables are used to control the ETL dates for each source system. These Dates tables
need to be configured for batch processing and they are built by APIs.
<Source Name>.SourceDate
The SourceDate table is used to store all the business dates for which data has been loaded into
InsightLanding for each source system used. One SourceDate table for each source system defined in the
Extract List table is required e.g. BS.SourceDate for Temenos Core Banking, Budget.SourceDate for Budget
etc. Each row in the table represents one individual business date within a specific source system, from the
earliest to the most recent. The dates stored in this table will be referenced by the
s_InsightLanding_CSI_Purge stored procedure during the purging process.
dbo.MaxString
This table, introduced in R18, contains column values that are larger than 4,000 characters. The
views use a CLR function to get the values from this table and populate them in the respective columns.
dbo.MaxStringCounter
This table, introduced in R18, contains column values that are larger than 4,000 characters. The
views use a CLR function to get the values from this table and populate them in the respective columns.
dbo.T24Entities
The T24Entities and the T24Attributes tables have been introduced from Release 2017 to comply with the
regulation requirements defined by BCBS 239 (Basel Committee on Banking Supervision's regulation
number 239), with subject title “Principles for effective risk data aggregation and risk reporting”.
These two tables are used to store in InsightLanding, respectively, the description of each table and the
description of each column imported from Temenos Core banking. This information is extracted from
Temenos Core banking’s help text files and is, therefore, consistent across the two systems.
The structure of the T24Entities table is covered below, together with a description of all the columns in
this table.
dbo.T24Attributes
The T24Attributes table, as previously mentioned, stores the description of each column imported from
Temenos Core banking. This information is extracted from Temenos Core banking’s help text files.
The structure of the T24Attributes table is covered below, together with a description of all the columns in
this table.
dbo.ViewMetadata
ViewMetadata is an Internal Table available for debugging data typing/profiling on columns.
dbo.DQSummaryLog
This table provides logging facilities for all InsightLanding tables that were subjected to Data Quality.
dbo.SystemParametersLanding
This table is a Data Manager Custom Table rule where system parameters that are specific to the
InsightLanding database are defined. As discussed in the Rules Engine section, the types and values for a
given system parameter are mapped. When the rule creation steps are run by an agent job, the
s_CreateRuleGroup procedure creates this table, hence all columns in this table are populated
automatically.
Views
The InsightLanding database provides abstraction views to select data from all batch tables loaded in the
landing database; these views use the same schema name as the columnstore index tables. A MIS_DATE
filter on the business date of the source data should always be used.
If a Flexstring load is done on a table for the BS source system, the views will join the columnstore index
tables to the MaxString tables to present all fields, including the raw full strings that are longer than 4000
characters.
A view will be created for each data source in Landing listing all the business dates with data. The syntax
for these views is <Source System>.v_BusinessDate, e.g. for the Core banking data source, the
view will be called BS.v_BusinessDate. This can be used to filter reports that access InsightLanding tables
directly.
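A minimal hedged sketch of how a report might use these views (the MIS_DATE column and the BS.v_Customer abstraction view are taken from the text above; the date literal is illustrative):

-- List the business dates loaded for the core banking source.
SELECT *
FROM   InsightLanding.BS.v_BusinessDate;

-- Then filter a landed abstraction view on one of those dates via MIS_DATE.
SELECT COUNT(*)
FROM   InsightLanding.BS.v_Customer
WHERE  MIS_DATE = '2024-03-28';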
From R19, it is also possible to build online views in InsightLanding. Online views will only be created if the
corresponding table definition in ExtractList has been set as OnlineProcess = 1. Landing Online views will
be a full set of all Extract List tables (including Batch only). They will include a union of online deltas and
previous business date batch data. It should be noted that it is important to pay attention to MIS_DATE
when doing joins.
Steps
14. Check if source date is defined in InsightETL, if not insert new date
15. Check if a schema exists for source system for data being loaded. If not create it.
16. Create new Clustered Columnstore Index tables in the schema if they do not exist. Create also the
corresponding view by calling s_InsightLanding_CSI_Views_Create
17. Load data for current ETL date. If there is data already delete first and then reload
18. If data load encounters column values greater than the largest supported type a process is
triggered to move the column to row store (MaxString tables). The view for the table is updated
to join both the base Columnstore Index table and the lookup table making the split seamless to
the end user
19. Call s_InsightLanding_CSI_BusinessDateViews_Create to create a view with all the business dates
with data for the source system.
Inputs
Source Name – Used to pass the name of the source system to be imported into Insight Landing.
This must match one of the Source Names defined in the Extract List table. Example: BS
BS Date – Supply a query that returns the current business date from InsightETL. Typically Select
BusinessDate from InsightETL.dbo.CurrentDate.
TableToLand – Supplies the name of the table to be loaded in InsightLanding. Acceptable values
are the table name or the keyword ‘ALL’
User Id – Supply the schema name for the default Insight Landing schema. This should be the
same as in Extract List and is almost always dbo.
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
TotalThreads – allows to manually assign the number of threads used to execute the stored
procedure (accepts NULLs)
OnlineProcess – defines whether online processing should be enabled. The default value is 0, which means
regular batch ETL
PartitionSchemeName – name of the partition scheme. Should be left NULL to use the default one. The
database administrator can use their own existing partition scheme if other than the default one
dbo.s_DQSummary_Load
Description
This stored procedure deletes the content of the DQSummaryLog table and populates it with new DQ-
related statistics.
Inputs
@MIS_Date – Stores the Current ETL Date
dbo.s_InsightLanding_CSI_Views_Create
Description
The removal of dated schemas means all daily loads of data from a source system are stored in a single table.
Also, the schema where the tables and views are contained never changes once created.
Core Banking multi-value fields can be larger than the biggest supported string type (NVARCHAR(4000)) and
would need to be stored as a MAX data type, which cannot be kept in a columnstore index. When the data load
encounters column values greater than the largest supported type, a process is triggered to move the
column to the row store (MaxString tables). This stored procedure creates reporting views by joining both the
base tables and the lookup table, making the split seamless to the end user.
Since the schemas in the columnstore index remain for the most part static, these views rarely need to be
recreated, with the exception of those cases when new columns are added or a string in a column grows over
the biggest supported string type on the source system being loaded.
Note that only columns needed should be returned in the result set.
Steps
1. Called by s_InsightLanding_CSI_Table_Update.
2. Creates a temp dataset using metadata with the list of columns for the table. If a column for the table is in
the MaxStringCounter lookup table, a function is used to replace the hashed string values with the
original raw value from the lookup table.
3. Drops the view if it already exists.
4. Creates a view based on the list of columns in the temp dataset.
Inputs
TableName – The table for which a view must be created
SourceName – Used to pass the name of the source system to be imported into Insight Landing.
This must match one of the Source Names defined in the Extract List table. Example: BS
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
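A hedged example call for a single table, assuming @-prefixed parameter names matching the list above:

-- Recreate the reporting view for one Landing table after new columns were added
EXEC dbo.s_InsightLanding_CSI_Views_Create
     @TableName  = 'ACCOUNT',
     @SourceName = 'BS',
     @BatchNum   = NULL;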
dbo.s_InsightLanding_CSI_Table_Reset
Description
This procedure is used to reset the Clustered Columnstore Index tables in InsightLanding back to their original
state when first deployed.
The schema definitions in the Columnstore Index are very static, thus if a new source data set is to be loaded
that has changed significantly from its first loads, it is best to start fresh to avoid any potential conflicts with
existing data. This is the case when data from a different Temenos Core Banking environment is extracted,
where STANDARD.SELECTION metadata definitions may be different.
Note that this stored procedure is intended to be run only during the implementation of the solution, as it
deletes all historical data loaded in Landing.
Steps
1. Builds a temp dataset with list of tables and views to be dropped
Inputs
Source System – The name of the source system that you are resetting Columnstore index data
for. This must match one of the source system names defined in Extract List. Default value is “All”
TableName – The table for which a view must be created. Default value is “All”
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
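A hedged example call, assuming @-prefixed parameter names matching the list above. Remember that this deletes all historical data loaded in Landing for the selected scope.

-- Reset every Columnstore Index table and view for the BS source system
EXEC dbo.s_InsightLanding_CSI_Table_Reset
     @SourceSystem = 'BS',
     @TableName    = 'All',
     @BatchNum     = NULL;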
dbo.s_InsightLanding_CSI_Purge
Description
This stored procedure is used to purge InsightLanding data within a certain date range when a columnstore
index (CSI) is in place, and to rebuild the CSIs for the remaining data. It replaces the previously used
s_InsightLanding_Purge and s_InsightLanding_RangePurge stored procedures, which can purge tables only
when rowstore indexes are in place.
s_InsightLanding_CSI_Purge can be used on an individual table or on all the tables landed from a specific
source system. The procedure first populates the ##LandingWorkTableCsiPurge temporary table with all
the T-SQL statements required for purging the selected tables and, where required, for rebuilding their
indexes; it then executes these statements and updates the logs. s_InsightLanding_CSI_Purge can run in a
purging or "Review only" mode: the former means that the stored procedure will actually delete data from
the selected tables, while the latter means that the procedure only populates ##LandingWorkTableCsiPurge
without executing the statements that are stored in it.
The major R19 improvement of s_InsightLanding_CSI_Purge is that the stored procedure now uses multi-
threaded deletion per table, i.e. each of the threads assigned to s_InsightLanding_CSI_Purge deletes
the content of a certain table, so that multiple tables can be purged in parallel. T-SQL statements for index
rebuilding, instead, are multithreaded internally by the SQL engine, hence no further Temenos-designed
multithreading is necessary. Another important enhancement within this stored procedure is that
s_InsightLanding_CSI_Purge executes data deletion in chunks of 100,000 rows to improve multi-threaded
processing performance. Multi-threading is a resource-intensive operation, so the deletion-by-chunks
within each thread helps to reduce the overall resource consumption. The most apparent direct result of
this feature is that it effectively limits the consumption of the temporary space for SQL transactions, with
almost equal or sometimes even faster speed. Furthermore, it reduces the disk space used and the size of
the log file.
It should be noted that s_InsightLanding_CSI_Purge always relies on the default compression level from
the first partition considered and does not support using different compression levels on multiple partitions.
Furthermore, even though there are normally two available approaches to defragment indexes in SQL, i.e.
Rebuild or Reorganize, Rebuild is the only option offered by default in the current stored procedure, as this
option has proven to perform faster. Locally developed statements that use the Reorganize option, however,
can be designed by setting the @OutputForReviewingOnly input parameter to 1 and subsequently
modifying the standard index rebuild statements stored in the ##LandingWorkTableCsiPurge temporary
table discussed in the next section.
This table is dropped and repopulated whenever s_InsightLanding_CSI_Purge starts, and dropped again
once all statements in it are completed successfully, unless s_InsightLanding_CSI_Purge is executed in
"Review Only" mode.
Note: ##LandingWorkTableCsiPurge is the name of the table in a single-tenant environment. In a multi-tenant
installation, the table name will change depending on the tenant's name, as for the databases' and agent
jobs' names, and it will follow the ##TNT_LandingWorkTableCsiPurge pattern, where TNT is the tenant's
name, e.g. ##Tenant1_LandingWorkTableCsiPurge.
Steps
1. Checks the batch-related inputs and starts the batches required to manage processing.
2. Validates the other input parameters.
3. Gets the most recent date landed from the appropriate <Source System>.SourceDate table.
4. Drops the content of ##LandingWorkTableCsiPurge, if any.
5. Logs the T-SQL statements and the other attributes of the new purging process to
##LandingWorkTableCsiPurge. The statements will be used to:
   - Retrieve the default purging date range from InsightLanding.dbo.SystemParametersLanding. The
     general purging date range defined by this table can be overwritten if a value exists in the
     PurgeOlderThan column of dbo.ExtractList for the specific table to be purged.
   - Select the compression level from the first partition as the new compression level for the rebuild.
   - Delete data from the appropriate tables in chunks of 100K rows each and update the row count.
   - Rebuild the columnstore index if required.
6. Executes the tables' data deletion in parallel.
7. Logs known error(s), including possible threading error(s) caused by the ActionQuery submitted and
unexpected CLR errors.
8. Logs the processing detail for each table.
9. Removes dates within the purging range from InsightETL.dbo.SourceDate, if asked to do so in
the @PurgeSourceDate input parameter and only if the CLR-run deletes completed successfully.
10. Flags dates as purged in InsightLanding.<Source System>.SourceDate.
11. Rebuilds the columnstore index if asked to do so in the @RebuildCSI input parameter.
12. Drops the content of the ##LandingWorkTableCsiPurge global temporary table.
13. Updates dbo.EventLogDetails and finishes (batch stopped).
Inputs
The s_InsightLanding_CSI_Purge procedure includes the following input parameters.
SourceName - The value of this input parameter, that uses data type NVarChar(128), is the name
of the source system that is used as table schema name, such as: 'BS' or 'CRM'
KeepMonthEnd - This input parameter defines whether you wish to keep month-end data for the table
or tables you are about to purge. Acceptable values are 0 or 1 (Bit data type). If set to 1, the
purge stored procedure will delete all the content for the selected tables in the selected date range
except for the month ends. If set to 0, the month-end data will be deleted too.
TableToPurge - This input parameter, that uses data type NVarChar(128), contains the name of
the table to be purged without the schema name or the keyword 'ALL' to signify that all tables
should be purged. The default value is ‘ALL’.
PurgeSourceDate - This input parameter defines whether dates within the purging range will be
removed from InsightETL.dbo.SourceDate. Acceptable values are 1 or 0. When this parameter
is set to 1, the dates affected by the purging will be removed from the InsightETL.dbo.SourceDate
table. When 0 is selected, instead, the historical information will be kept in
InsightETL.dbo.SourceDate. The default value for this parameter is 1.
RebuildCSI - This input parameter defines if the columnstore indexes will be rebuilt at the end of
the purging process. This compresses the rowgroups and improves overall query performance.
Acceptable values for this parameter are 1 (i.e. the procedure rebuilds the columnstore index after
rows have been deleted) or 0 (i.e. indexes are not rebuilt) and the default value is 1.
BatchNum - This input parameter stores the current batch number and its default value is null. If
null, the stored procedure will automatically select the most recent active batch from
InsightETL.dbo.Batch and assign this to the thread
TotalThreads - This input parameter stores the total number of threads used by the procedure
and it is automatically determined if the value assigned is null or less than 1 (i.e. the default); in
this case, the server's maximum number of schedulers will be used as the number of worker threads
OutputForReviewingOnly - This input parameter determines whether the stored procedure is
executed to purge tables or just to review its potential results. As previously highlighted, the
purging stored procedure stores all statements to be run in a temporary table called
##LandingWorkTableCsiPurge before they are executed. @OutputForReviewingOnly controls
whether the purging stored procedure stops at the step in which ##LandingWorkTableCsiPurge is
populated, or continues and executes the statements logged in this temporary table. Acceptable
values for this input parameter are 1 or 0. If 1 is selected, the stored procedure will not execute
the T-SQL delete statements and will not rebuild the columnstore indexes. Instead, the review
contents produced are output to a global temporary table for reviewing purposes. If the value
selected is 0, data deletion and index rebuilding are executed as per the parameters above.
It should be noted that s_InsightLanding_CSI_Purge does not include any input parameter to define the
dates range to be considered for the purging as this can be parameterized globally in
v_SystemParametersLanding in InsightETL as part of the Data Manager rule definition or specified in the
dbo.ExtractList configuration table for individual tables. As mentioned in the steps section, values defined
in ExtractList will supersede the definition in v_SystemParametersLanding.
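A hedged sketch of a review-only run followed by an inspection of the generated statements; the @-prefixed parameter names are assumptions based on the input list above.

-- Generate, but do not execute, the purge and rebuild statements for the BS source
EXEC dbo.s_InsightLanding_CSI_Purge
     @SourceName             = 'BS',
     @KeepMonthEnd           = 1,
     @TableToPurge           = 'ALL',
     @PurgeSourceDate        = 1,
     @RebuildCSI             = 1,
     @BatchNum               = NULL,
     @TotalThreads           = NULL,
     @OutputForReviewingOnly = 1;   -- review-only mode

-- Review the T-SQL statements that an actual purge would execute
SELECT * FROM ##LandingWorkTableCsiPurge;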
This procedure is used to force all of the rowgroups into the columnstore, and then to combine the
rowgroups into fewer rowgroups with more rows. The ALTER INDEX REORGANIZE online operation also
removes rows that have been marked as deleted from the columnstore index tables.
Clustered Columnstore indexes will be reorganized in a database when any of the following criteria are met:
- Compress all open rowgroups if the rows in an individual rowgroup, or the sum of rows for the index, is
  greater than the supplied value
- Reorganize if a rowgroup has fewer rows than the supplied value
- Reorganize when more than the supplied % of rows have been deleted
- Reorganize when any segments contain more than the supplied % of deleted rows
- Reorganize if more than the supplied number of segments are empty
Steps
1. Create temp metadata of all clustered columnstore indexes in database
2. Create temp metadata of all OPEN rowgroups that met condition to be compressed
3. Create temp metadata of COMPRESSED rowgroups that met condition to be merged
4. Create temp metadata of rowgroups where deleted rows have met conditions to be removed
5. Issue ALTER INDEX REORGANIZE to compress all open row groups or to merge compressed row
groups
6. Issue ALTER INDEX REORGANIZE to remove deleted rows in the compressed row groups
Inputs
DatabaseName – Columnstore Index has been implemented in InsightLanding and
InsightWarehouse databases. Specify either database name
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
dbo.s_InsightLanding_Purge (non-CSI)
Description
This procedure is used to purge any tables from InsightLanding that exceed the number of dates that
should be stored, as configured in the Purge Older Than column of the Extract List table, starting from the
date provided as an input parameter. From R18, this stored procedure will only be used if a client decides
not to avail of Columnstore indexes (CSI).
Once all tables for a particular Landing schema have been purged then the schema itself is removed.
Steps
1. Find the schema for the date, if it exists.
2. Remove all tables from the schema where the range from the schema date to the purge date exceeds
the PurgeOlderThan value.
3. Remove the schema if no tables are left in it.
Inputs
Source System – The name of the source system that you are purging data for. This must match
one of the source system names defined in Extract List.
Date – Date that should be included in the check to see if dates should be purged
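A minimal call sketch, assuming @-prefixed parameter names matching the inputs above:

-- Purge BS tables whose retention (PurgeOlderThan) is exceeded as of the supplied date
EXEC dbo.s_InsightLanding_Purge
     @SourceSystem = 'BS',
     @Date         = '2019-05-31';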
dbo.s_InsightLanding_RangePurge (Non-CSI)
Description
This procedure is used to purge any tables from InsightLanding that are older than a certain date (defined
in the PurgeOlderThan column of the ExtractList table), by internally calling s_InsightLanding_Purge.
In addition to this, s_InsightLanding_RangePurge also allows you to pass a range of dates which it will loop
through, using each date to check whether any table extracts are older than their defined retention period.
Any dates older than the start date parameter that is passed to this procedure will be excluded from the
check and will not be removed during the purge process. This is done in case a bank needs to retain non-
month-end daily extracts that exceed the range set in the PurgeOlderThan column in ExtractList.
From R17, the procedure that purges Landing data will use the new BSEndofMonth flag in
InsightStaging..BSSourceDate instead of the calendar month-end when purging daily data. From R18, this
stored procedure will only be used if a client decides not to avail of Columnstore indexes (CSI).
Once all tables for a particular Landing schema have been purged then the schema itself is removed.
Steps
1. Verify if the date is a month-end date and whether it should be retained or not
2. Exec s_InsightLanding_Purge to drop all tables where the PurgeOlderThan value in ExtractList is not
null and the difference between the latest date in Landing and the current date being processed is
greater than the PurgeOlderThan value.
3. If all tables in the schema have been dropped, then drop the schema
Inputs
Start Date - The earliest date that should be included in the check to see if dates should be purged.
Any date older than this date will be excluded from the check and retained. Pass a valid date format
that is correct for the collation being used, e.g. YYYY-MM-DD for the default collation.
End Date – The most recent date that should be included in the check to see if dates should be
purged. Pass a valid date format that is correct for the collation being used, e.g. YYYY-MM-DD for the
default collation.
Delete_EndOfMonth – If set to Yes then month ends will not be excluded from the purge check
and will be deleted if they exceed the retention period defined in Extract List. This parameter
defaults to No. Syntax: 1 for Yes, 0 for No.
SourceSystem – The name of the source system that you are purging data for. This must match
one of the source system names defined in Extract List.
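A hedged example call, assuming @-prefixed parameter names matching the inputs above:

-- Check every date in the range, keep month ends, and purge anything beyond its retention period
EXEC dbo.s_InsightLanding_RangePurge
     @StartDate         = '2019-01-01',
     @EndDate           = '2019-05-31',
     @Delete_EndOfMonth = 0,   -- 0 = No, month ends are retained
     @SourceSystem      = 'BS';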
dbo.s_InsightLanding_Compression
Description
This procedure compresses the InsightLanding database as part of regular maintenance. The first time
s_InsightLanding_Compression is executed, this is done for all tables in all schemas; since the 2017 release,
on subsequent runs the stored procedure will only compress tables that have not previously been compressed.
Page compression is used as the data compression type.
This compression stored procedure is provided out-of-the-box with any Analytics platform, however, it
needs to be enabled and scheduled. It is recommended to run it at least on a weekly basis.
Steps
1. Rebuild all tables in schema with data compression set to page
2. Repeat for each schema
Inputs
Enable - If this input parameter is set to 1, the stored procedure will compress tables at the page
level (DATA_COMPRESSION = PAGE). This input parameter can be set to 0 to disable compression
on a table
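A minimal call sketch, assuming the parameter keeps the name shown above:

-- Compress (page level) any tables not previously compressed; schedule weekly as recommended
EXEC dbo.s_InsightLanding_Compression @Enable = 1;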
dbo.s_InsightLandingTable_Update
Description
This procedure was used in R17 and earlier to bring new source data into InsightLanding, or to update
existing source data already stored in Landing, within the Analytics ETL. It is still available for financial
institutions on R18 that for any reason wish to use InsightLanding without the Columnstore index option.
The procedure needs to be called for each source system whose columns are archived in InsightLanding.
dbo.s_InsightLandingDateSchemaCombineViews_Create
Description
Stored procedure used in R17 and earlier releases to create combined views from the daily loads of data from
source systems that are stored in their own SQL schemas.
Online.s_InsightLanding_Online_Table_Update
Description
This procedure is used to copy data from the InsightImport Online schema tables into the InsightLanding
Online Schema tables. It is called by the Online.s_ProcessImportOnline_Update procedure.
Inputs
Source Name – Used to pass the name of the source system to be processed. In this case it is BS
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
TotalThreads – allows to manually assign the number of threads used to execute the stored
procedure (accepts NULLs)
Online.s_InsightLanding_Online_Views_Create
Description
This procedure creates the InsightLanding Online views. The views will be a union of intra-day data and the
previous business date's batch data. It is called by the Online.s_InsightLanding_Online_Table_Update procedure.
Inputs
Source Name – Used to pass the name of the source system to be processed. In this case it is BS
Table Name – Name of table for which a view is to be created. Acceptable values are the table
name or the keyword ‘ALL’
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
Online.s_InsightLanding_Online_EOD
Description
This procedure processes the Online EOD. It uses the InsightLanding Online views to copy data to the BS
Landing schema tables in preparation for the batch ETL end of day. It is called by the
Online.s_ProcessImportOnline_Update procedure.
Inputs
Source Name – Used to pass the name of the source system to be processed. In this case it is BS
Table Name – Name of table for which a view is to be created. Acceptable values are the table
name or the keyword ‘ALL’
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
Functions
dbo.fn_DataTypeOrder
Description
This scalar-valued function determines a ranking number for a data type compared to the other data types.
Specifically, this function returns the order of the size of a data type so that columns can be ordered by
size and so that a size can be used that will not result in any implicit casting issues.
The idea is that a column with a lower ranking data type can be inserted into a column with a higher
ranking data type.
Steps
1. Rank data type based on the table below.
DataType Rank
bit 1000
tinyint 2000
smallint 3000
int 4000
real 5000
smallmoney 6000
money 7000
decimal 7500
float 7600
smalldatetime 8000
datetime 9000
datetime2 9500
char% 100000
nchar% 200000
varchar% 300000
nvarchar% 400000
Inputs
DataTypeName – The name of the data type. E.g. int.
MaxLength – The length of the data type. E.g. 30.
Precision – The number of digits in a number.
Scale – The number of digits to the right of the decimal point in a number.
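A usage sketch, assuming the parameters are passed in the order listed above; the expected values follow the ranking table.

-- nvarchar ranks 400000, int ranks 4000, so nvarchar is treated as the larger type
SELECT dbo.fn_DataTypeOrder('nvarchar', 100, 0, 0) AS NvarcharRank,
       dbo.fn_DataTypeOrder('int', 4, 10, 0)       AS IntRank;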
dbo.fn_FullDataType
Description
This function returns a full data type, e.g. nvarchar(120), given a base type (e.g. nvarchar) and a max length.
Steps
1. Determine the full data type, e.g. nvarchar(30), given nvarchar and 30.
Inputs
DataTypeName – The name of the data type. E.g. Int.
MaxLength – The length of the data type E.g. 30.
Precision – as described previously
Scale– as described previously
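A usage sketch, assuming the same parameter order as fn_DataTypeOrder:

-- Expected result: nvarchar(30)
SELECT dbo.fn_FullDataType('nvarchar', 30, NULL, NULL) AS FullDataType;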
dbo.fn_DeFlexStringCLR
Description
This is a scalar-valued function used to import MaxString content into InsightLanding views.
Configuration
Configuring ExtractList
The ExtractList table must be populated with a record for every source table that needs to be imported into
Landing. Any table that you need to bring into InsightWarehouse for analytical reporting and ad hoc analysis
needs to have a record here. In addition to this, any source table that you want to store over time in
Landing, to be used for reporting or by another 3rd-party system, needs to have an entry in Extract List.
By default, a certain pre-configuration for the Extract List table will be included in the Advanced Analytics
Platform. This configuration will handle a completely unmodified Temenos Core banking Model Bank
configuration. Minor exceptions are the RE_CRF_GL and RE_CRF_PL tables, which are often renamed per
Core banking installation and will have to be renamed in Extract List back to RE_CRF_GL and RE_CRF_PL.
The default configuration will assume these tables are titled RE_CRF_MBGL and RE_CRF_MBPL so you will
have to replace these values with your bank’s custom table names.
If core fields from any of these tables have been moved to a local table in your installation of Core banking,
then you will need to create a record to import those local tables. This assumes the local table has been
configured in DW.Export and InsightImport. Also, any additional tables that have been added to
InsightImport, or that need to be imported from another 3rd-party source system, will need to be added.
In Extract List, you will need to list out all tables in InsightImport, including the additional tables created as
part of the local ref, multi-value and sub-value processing. You will notice some of these tables are already
included in the default configuration for Temenos Core banking Model Bank.
You can enable or disable online processing for a table. You can disable the import of tables that you no
longer need or that are not available, but this is not mandatory: the Landing update procedure will not fail
due to missing tables and will carry on.
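The two sketches below illustrate typical ExtractList changes. The SourceName and TableName column names are assumptions; check the actual column names in your dbo.ExtractList table before running anything similar.

-- Enable online processing for a single table
UPDATE dbo.ExtractList
SET OnlineProcess = 1
WHERE SourceName = 'BS' AND TableName = 'ACCOUNT';

-- Point the default CRF entry at your bank's actual table name
UPDATE dbo.ExtractList
SET TableName = 'RE_CRF_GL'          -- replace with the name used in your Core banking installation
WHERE TableName = 'RE_CRF_MBGL';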
InsightSource
Overview
InsightSource is used to combine source data from multiple source systems that may have different import
frequencies or source dates. It takes source system data with differing extract frequencies and makes one
combined copy of the most up to date data from each source system. This eventually becomes one business
date in InsightWarehouse.
Even when dealing with different import frequencies or source dates, InsightSource treats the data copied
from the InsightLanding database as if it were only one day of data. The content of InsightLanding tables
is stored in InsightSource tables having the same name but with the date removed from the schema
(e.g. BS.CUSTOMER, BS.ACCOUNT, Budget.GLBudget etc.). Tables are stored in InsightSource grouped in
separate schemas for each source system (e.g. BS, Budget etc.).
Furthermore, transactions from different dates in Landing are combined here. E.g. if the Analytics ETL
process requires more than one day of transactions, the number of days to combine can be specified in
the ExtractList table in InsightLanding.
This database is volatile so, every time Analytics ETL is run, the data in it will be replaced with the latest
copy of source data from InsightLanding. InsightSource only stores one business date at a time, as
previously stated, so only the current business date being run will be populated.
From R17, ETL batch control and an upgraded multithreading process have been added to the process
updating this database and all logging has been redirected to dedicated tables in the InsightETL database.
Technical Details
Architecture
In the figure below, we can see how the InsightSource database fits in the Advanced Analytics platform’s
architecture.
Technical Components
SQL Stored Procedures
s_InsightSource_CSI_Update
Description
This procedure is used to import data from InsightLanding to InsightSource for one source system at a
time, with the most up to date data available from each source system being loaded into InsightSource.
The selection of the latest date for the business entity (i.e. Entity Date) requires a SQL statement in the
v_systemParameters view to determine which Landing date should be used for data processing.
Steps
1. Determine correct source date for the business date being processed
2. Check if data is available in InsightLanding for the source date
3. Remove all InsightSource object if all objects are being processed
4. Copy all source tables into InsightSource
5. Repeat for each source system
Inputs
Sources – The source system of the tables to be processed from InsightLanding (e.g. BS). One
source system schema at a time should be used; null values are not acceptable.
BsDate – The business date currently being processed (see Configuration section)
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
TotalThreads – allows to manually assign the number of threads used to execute the stored
procedure (accepts NULLs)
s_InsightSource_CSI_UpdateAll
Description
This procedure is used to build a single set of data from all source systems with the most up to date data
available from each source system. To do so, this stored procedure internally calls s_InsightSource_Update
and uses the source dates defined in InsightETL for each source for the current business date being
processed.
InsightSource is volatile, so all tables in it are dropped and recreated for each full ETL run.
Steps
1. Exec s_InsightSource_Update for first source system
2. Determine correct source date for the business date being processed
3. Check if data is available in InsightLanding for the source date
4. Remove all InsightSource objects if all objects are being processed
5. Copy all source tables into InsightSource
Inputs
BsDate – The business date currently being processed. This date is used to look up the respective
source dates from the InsightETL Source Dates table.
s_InsightSource_Update (deprecated)
Description
This procedure was used, in R17 and earlier releases, to import data from InsightLanding to InsightSource
for one source system at a time, with the most up to date data available from each source system being
loaded into InsightSource.
This stored procedure used the source dates defined in InsightETL for each source for the business date
being processed. The InsightSource database is volatile so all tables in it were dropped and recreated for
each full Analytics ETL run.
Steps
1. Determine correct source date for the business date being processed
2. Check if data is available in InsightLanding for the source date
3. Remove all InsightSource objects if all objects are being processed
4. Copy all source tables into InsightSource
5. Repeat for each source system
Inputs
Sources – The source system of the tables to be processed from InsightLanding (e.g. BS). One
source system schema at a time should be used; null values are not acceptable.
BsDate – The business date currently being processed. This date is used to look up the respective
source dates from the InsightETL Source Dates table.
BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL
Batch control in the stored procedure and can be obtained from a parameter or indirectly from
dbo.Batch table through the function
TotalThreads – allows to manually assign the number of threads used to execute the stored
procedure (accepts NULLs)
s_InsightSource_UpdateAll (deprecated)
Description
This procedure was used, in R17 and earlier, to build a single set of data from all source systems with the
most up to date data available from each source system. To do so, this stored procedure internally called
s_InsightSource_Update and used the source dates defined in InsightETL for each source for the business
date being processed.
InsightSource is volatile, so all tables in it were dropped and recreated for each full ETL run.
Steps
1. Exec s_InsightSource_Update for first source system
2. Determine correct source date for the business date being processed
3. Check if data is available in InsightLanding for the source date
4. Remove all InsightSource objects if all objects are being processed
5. Copy all source tables into InsightSource
6. Repeat for each source system
Inputs
BsDate – The business date currently being processed. This date is used to look up the respective
source dates from the InsightETL Source Dates table.
s_InsightSource_Synchronize_Schema
Description
This procedure is used when a data change that affects mapping has been made and there is a need to
process data both before and after the change. Usually, this would require the bank to change the mapping
involved depending on which date is being processed. Instead, code is added to this procedure to account
for the change and reflect it in the source tables before mapping into InsightWarehouse occurs in
InsightStaging.
This procedure is empty to begin with, and custom code will need to be added to deal with the change in
data. A simple example of where this may be employed is when mapping has changed because a core banking
system field has been moved or added. If a field has been added (and included in the source data mapping
in InsightStaging), then reprocessing any date before this change will fail, as the mapping will have been
updated to accommodate this new field.
This procedure is not called by the default InsightETL job and will need to be added to it once it is
required.
Configuration
InsightSource does not require any direct configuration and hence there are no configuration tables
present. However, it requires that valid source dates are already entered into InsightETL for the current
business date being processed. This update should be included as part of the InsightLanding procedures
or through a manual edit of the Source Date table in InsightETL.
InsightSource also requires that the data for each source date for each source system is present in
InsightLanding.
The available DateSelection options are 'EntityDateSQL', 'LastMonthDate', 'PreviousDayDate', 'MonthEndDate'
and 'CustomDateSQL'. If the DateSelection entry is not specified, the latest SourceDate in InsightLanding will
be taken into consideration.
'EntityDateSQL' assumes that COBs are run on different days but that it is preferable to run Analytics
ETL every day with the most recent data of each entity. The latest SourceDate selection is the default
and all data is in the Landing table for this business date.
'LastMonthDate' uses the last date of the previous month that has data.
'PreviousDayDate' uses 1 day before the current date.
'MonthEndDate' uses the last date of the current month.
'CustomDateSQL' requires a SQL statement in SystemParameters to find the date for a source.
Note: SQLEntityDate uses Landing views; all other options use Landing tables for the select into source data.
We should check the v_SystemParameters view or its corresponding table to verify which date selection
will be used before executing the stored procedure. The latest SourceDate will be used if no DateSelection
entry is defined in v_SystemParameters.
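For example, a quick check of the configured date selection before running the update procedures might look like this; the database and view qualification shown is an assumption.

-- Review the DateSelection (if any) configured for each source system
SELECT *
FROM InsightETL.dbo.v_SystemParameters;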
InsightSource can be configured to run as part of the Analytics ETL SQL agent job or ran manually via the
stored procedures and their parameters. The required stored procedure as part of the ETL job can be either
of the two listed below.
s_InsightSource_CSI_UpdateAll (bsDate)
s_InsightSource_CSI_Update (Sources, bsDate, BatchNum, TotalThreads)
The first option is selected if we want to have all tables for all source systems imported into InsightSource
at the same time. The second option is used if we want a separate import per source system, e.g. to ease
logging and troubleshooting. For example, if we have BS and Budget source systems and we want separate
InsightSource updates, we can use the following code, split into two steps:
USE InsightSource
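-- A sketch of the two separate update steps; the @-prefixed parameter names and the
-- CurrentDate lookup are assumptions based on the input lists earlier in this chapter.
DECLARE @BsDate DATE = (SELECT BusinessDate FROM InsightETL.dbo.CurrentDate);

-- Step 1: import the BS source system into InsightSource
EXEC dbo.s_InsightSource_CSI_Update
     @Sources = 'BS', @BsDate = @BsDate, @BatchNum = NULL, @TotalThreads = NULL;

-- Step 2: import the Budget source system into InsightSource
EXEC dbo.s_InsightSource_CSI_Update
     @Sources = 'Budget', @BsDate = @BsDate, @BatchNum = NULL, @TotalThreads = NULL;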
InsightETL
Overview
The InsightETL database includes the backend tables used for the definition of business rules that categorize
data, create bands, create datasets and custom tables, and perform calculations, as well as the tables that
control logging, performance enhancements and multi-threading. In addition to this, InsightETL has several
additional features that will be discussed below.
This database replaces both the InsightMasterData and Insight databases that were present in pre-R18
releases, and incorporates all their functionalities.
Rules Engine
GDPR
The General Data Protection Regulation (GDPR) (EU) 2016/679 is a regulation in EU law on data protection
and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). It
also addresses the export of personal data outside the EU and EEA areas. The GDPR aims primarily to give
control to citizens and residents over their personal data and to simplify the regulatory environment for
international business by unifying the regulation within the EU. This regulation grants customers the right
to have their details erased from a bank’s database, including the Advanced Analytics Platform’s. A new set
of configuration tables have been included in InsightETL to configure GDPR erasure and a dedicated chapter
of this document will discuss this new feature thoroughly.
Data Quality
As previously discussed in the chapter dedicated to the InsightImport database, Data Quality Import (DQI)
is a new feature added to Analytics ETL and Process Data ExStore Import in the R18 release. Its smallest
unit of operation is the field. Once configured, tables in InsightImport are subject to data quality checks
and, if necessary, in-place corrections are made to the fields according to the DQI rules.
Except for the hard-coded Failover rules, DQI rules are defined in the InsightETL.dbo.RuleDataQuality table.
This table, together with all the new DQI-related functions, will be discussed further on in this chapter.
The toolkit can also be used to control how temporary tables are materialized from v_source views during
the Extract and Transform phases of Analytics ETL. Even in this case, its purpose is to boost performance
in various views.
In addition to these toolkit functionalities, InsightETL serves as a centralized logging location, where a single
table stores information about any Analytics ETL process. Secondly, this database hosts the detailed logging
table used by the multi-threaded stored procedures in InsightImport, Landing and Source. Finally, it also
stores the two separate detailed logging tables used by the core ETL processing in the InsightStaging
database, which relies on a different type of multi-threading.
Multi-threading
As previously mentioned, the core ETL stored procedures in InsightImport, InsightLanding and
InsightSource all internally invoke the s_ParallelExecActionQueriesTx stored procedure in the Insight
database, which is the heart of the new parallelization process.
s_ParallelExecActionQueriesTx is a SQL CLR procedure which orchestrates multi-threading and centralized
logging in Analytics for the three databases above. The outcome of the logging process, however, is not
stored in the Insight database but in the EventLog and EventLogDetails tables of the InsightETL database,
which are discussed further in this chapter. We will see more detailed information about this
stored procedure as we proceed.
Manages Datasets – A number of business rules can be easily added to set the values of multiple fields.
Data Quality Import Configuration – A table is maintained to define rules for Data Quality Import.
Maintain System ETL Dates – A record of the current ETL date and of the dates for previous ETL runs per
source system is maintained in InsightETL.
Defines report labels and enables multi-language – A table is maintained to define attribute labels in reports
and can be configured for multi-language settings.
Centralized Logging – InsightETL stores the EventLog table, which provides centralized logging for
any ETL Analytics process in any database.
In addition to this generic logging table, InsightETL hosts the EventLogDetails table. This is where the
stored procedures and functions responsible for enhanced multi-threading store a very detailed log for each
task initiated and/or completed by a thread, within the initial part of Analytics ETL in InsightImport, Landing
and Source. This database also hosts the logging tables for Batch control.
InsightETL also contains the StagingEventLog and StagingEventLogDetails tables. These are the tables where
the stored procedures and functions performing the Extraction, Transformation and Loading of InsightStaging
and InsightWarehouse tables store a detailed log for each task initiated and/or completed by a thread.
The EventLog table is referenced by EventLogDetails and also by StagingEventLog.
Technical Details
Architecture
InsightETL business rules are incorporated into the Analytics ETL data flow during the Extract and
Transform phases of the core Analytics ETL, which are carried out in the InsightStaging database.
InsightStaging applies the rules as specified in the rule definition, normally according to the following
principles:
Dimension Data
o At the end of the extract – after source tables have been merged into the staging table
o At the beginning of the transform – before Dim table is created
Fact Data
o At the end of the transform – after Fact table is created
These updates are driven by the Rule engine-related tables (more details will be provided later in this
section).
Technical Components
InsightETL consists of tables, views and stored procedures critical to adding data to, customizing and
controlling the Analytics ETL process.
Tables
dbo.RuleDefinitions
This table stores definitions for business rules for each installation. It replaces the now deprecated
InsightMasterData.dbo.CustomColumn table that was used to manage customization rules in pre-R18
releases.
RuleDefinitions has the details of the column(s) being added and the key columns that they are based on,
e.g. a new Product hierarchy column can be based on a composite of product code columns. It also stores
custom SQL code that can be inserted into the Analytics ETL process.
As previously mentioned, there are five types of rules that can be added to a database using Data Manager,
i.e. Lookup, Banding, Calculation, Dataset and CustomTable. The rule type is specified in the Operation
column of the RuleDefinitions table.
Operation Type of rule defined. Acceptable values are Lookup, Banding, Calculation,
Split, MaxColumn and Dataset.
SourceSQLStatement This the source SQL statement used by the business rule if the Operation
selected is Dataset.
DataSetDescription Descriptive information about the source SQL statement, if populated.
SourceStoredProcedureName Reserved for future use.
DatasetId Reserved for future use.
CreatedBy Name of the user or Process that created the Business rule.
CreatedDate Date in which the Business rule was created.
LastModifiedBy Name of the user or Process that last modified the Business rule.
LastModifiedDate Date in which the Business rule was last modified.
BusinessDateFrom Date from which the Business rule becomes active. This date is set to
1900-01-01 but this can be modified for temporal metadata (future use.).
BusinessDateTo Date until which the Business rule remains active. This date is set to 9999-
12-31 but this can be modified for temporal metadata (future use).
IsActive If set to 1, this flag makes the rule active. If set to 0 or empty, the rule is
inactive.
IsCurrent Reserved for future use.
IsPublished Reserved for future use.
SourceTableName Reserved for future use.
SourceTableSchemaName Reserved for future use.
dbo.RuleColumns
This table stores the details of the columns populated by the business rules.
dbo.RuleValues
This table is used to store the values of the virtual lookup table for each Operation = Lookup rule. It
performs a role similar to CustomValue’s in pre-R18 releases.
dbo.RuleExpressionLevels
This table stores rules expression levels and is used for Banding only. The RuleExpressionLevels table has
the following columns:
dbo.RuleFilters
This table is designed to store optional filters for rules with Operation = ‘MaxColumn’ only. It will be used
for future development.
dbo.RuleCustomers
This table is used for GDPR configuration and it contains the details of customers who have erasures
initiated, in progress or have completed erasures.
dbo.RuleCustomersRuleColumns
This table is used for GDPR configuration. It represents an intersection between RuleCustomers and
RuleColumns. It contains the erasure date (ActionDate) for a particular column/ customer combination, as
well as the erasure replacement value.
dbo.RuleReplacements
This table is used for GDPR configuration. It contains replacement values based on datatype and table type
(dimension or other) for Analytics (InsightWarehouse) erasures.
dbo.CDPPurpose
This table is used for GDPR configuration. It contains retention periods for different column purposes,
used to calculate the ErasureDate of a column. It is only necessary for Analytics (InsightWarehouse) erasures.
dbo.RuleDataQuality
This table is where all the DQI (Data Quality Import) rules are defined, except for the hard-coded Failover
rules. Each row in this table corresponds to a rule defining how to handle the replacement of a column
with a specific data type.
Each rule is applied to a column, namely either on its data type or on its value. There are four types of
configurable DQ rules, listed in the order of high to low precedence as follows:
1. RegEx rule;
2. Equal rule;
3. General rule;
4. Default rule.
RegEx, Equal and General rules are defined to match on data values, while Default and Failover rules are set
for data types. Default rules are pre-installed during the setup, with RuleColumnId always equal to -1 (see
below).
Hard-coded rules are specially designed as the last means to automatically revive a failing Analytics ETL.
Failover rules remain inactive until the rare event in which all other DQI rules, even the Default ones, are for
some reason unavailable. Besides the Failover rules, there is also a set of hard-coded special General rules.
The former are deployed in memory with binary CLR code, and the latter are deployed to RuleDataQuality
through the s_MergeKeyColumnDQRules procedure to ensure that the primary-key-to-be columns are never
imported with nulls.
Except for the special General rules enforcing primary key columns to be non-null, any other DQI Replacement
value defined in the RuleDataQuality table can be customized to the client's preference.
Due to the SQL_VARIANT type that the Replacement columns employ, a special syntax with an explicit
CAST/CONVERT must be used, otherwise the text typed directly into the table is always interpreted literally as
NVarChar.
Finally, it should be noted that multiple DQI rules can be defined for a column. Rules are chained in
series and applied when checking and revising the column considered. Rules having the same precedence level
are applied one after the other, in random order, until no more DQI issues are detected. If all the rules with the
same precedence have been applied but further DQI errors are detected, the next level of rules with lower
precedence will activate to try to resolve the issue in the problematic column, if more rules are defined.
For a RuleDefinitions record to be correctly associated by the DQI process with the corresponding data
quality rules, it must have the following columns’ attributes set:
DatabaseName = ‘InsightImport’;
SchemaName and TableName are set to the target table;
Operation = ‘DataQuality’ or ‘Data Quality’;
IsActive = 1;
IsCurrent = 1.
For DQI to recognize the relevant data quality-related entries in RuleColumns, they must have the following
columns' attributes set:
Important Note: RuleDataQuality is very sensitive and will get corrupted if not
updated properly. Replacements for bad data are made with the default values defined in this table.
The table cannot be edited directly, as it contains sql_variant and computed columns; rows can only be
inserted or updated using a SQL script with the value explicitly converted to the target type.
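A hedged sketch of the required syntax; the Replacement column name, the RuleColumnId filter and the target data type are illustrative only.

-- Explicit CAST so the sql_variant stores an int instead of being interpreted as NVarChar
UPDATE InsightETL.dbo.RuleDataQuality
SET Replacement = CAST(0 AS int)
WHERE RuleColumnId = -1;   -- illustrative filter; target the specific rule row in practice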
dbo.AttributeCalculations
This system configuration table is used to define string operations (i.e. split, calculations and dataset
creation) in columns processed during Analytics ETL/Process Data ExStore. Operations in this table can be
used to both improve performances of Analytics ETL by splitting up compound column values and to create
brand new columns with the aid of T-SQL functions or stored procedures.
For example, AttributeCalculations can be used to define that the compound id of a LIMIT record should
be split into three parts to ease the identification of the customer @Id in the record (which is part of the
limit id but not part of the limit record), and then place the extracted Customer @Id in a separate column in
LIMIT.
Or, for example, AttributeCalculations can be used to create a new column in the ACCOUNT table which results
from the calculation of the limit reference number from the compound LIMIT.REF field.
AttributeCalculations is an optional table and will only be made available if the size of a client's database
requires it.
Even though AttributeCalculations is capable of defining working splits, calculations and datasets in both
R17 and R18 releases, from the R18 release it is recommended to use this table only for the local development
of Splits. Locally developed Datasets and Calculations should instead be defined using the Data Manager (AKA
Rules Designer) functionality of the Analytics Front End Web application. The details of these
Dataset and Calculation rules will be stored in the back end through the Rule Engine-related tables of
InsightETL.
ColumnName Description
DatabaseName Name of the database hosting the table in which the
operation, calculation or dataset calculation is done.
SchemaName Schema name of the table in which the
operation, calculation or dataset calculation is done.
TableName Name of the table in which the
operation or calculation is done.
ColumnName The use of this column depends on the value of the
Operation column associated with it:
For Split operations – Name of the column in which
the split operation is done
For Calculation operations – Not Used
For Dataset operations – Name of the column
storing the results of the Dataset’s stored procedure
ColumnOrder Position of the column subject to operation or
calculation in its table (e.g. 1 for @Id in CUSTOMER
table)
Operation Specifies if the entry defined is used for a Split or a
Calculation or a Dataset
- Split - the operation defined in this entry will
split the compound value of a specific column
in a table on the delimiter and add the
resulting values as new columns, appended
at the end of the table.
- Calculation – the current entry defines a
calculation which can be performed on one or
multiple columns. The resulting value can
either be used to overwrite the old value of a
column in the table or be appended as new
column at the end of the table
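Purely as a shape sketch of the documented columns, a Split entry for the LIMIT example above might be inserted as follows; the real table may require additional columns (for instance the split delimiter), so treat this as illustrative only.

-- Split the compound LIMIT id so that the Customer @Id becomes a separate column
INSERT INTO InsightETL.dbo.AttributeCalculations
    (DatabaseName, SchemaName, TableName, ColumnName, ColumnOrder, Operation)
VALUES
    ('InsightImport', 'dbo', 'LIMIT', '@ID', 1, 'Split');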
dbo.Indexes
This configuration table is used to create additional indexes in tables of the InsightSource database,
especially indexes on foreign keys.
If we are using complex v_source views on large datasets, the core Extraction and Transformation phases
of the Analytics ETL (in charge of performing string manipulation and source-to-target mapping) can run
into performance issues. This generates the need for creating additional indexes in the InsightSource
database, which can be configured through this table (even though it can potentially also be used to create
indexes in other databases, e.g. Landing).
ColumnName Description
TableName Name of the table for which the index is created.
E.g. ACCOUNT
ColumnName Name of the column for which the index is created.
E.g. Categ_Entry
ColumnOrder The order of the column in the index that will be
created.
E.g. 1
IndexNumber If set to 1, all the columns in a table will be in one
index. Else, multiple indexes will be created in the
same table.
E.g. 1
IndexType Type of index, i.e. CLUSTERED or NONCLUSTERED.
The default value is NONCLUSTERED.
E.g. NONCLUSTERED
ColumnUsage Defines the column's usage in the index;
acceptable values are:
- Equality: for CLUSTERED or
NONCLUSTERED key columns
- Include: to be used if a nonclustered index
is extended by including non-key columns in
addition to the index key main columns
- NONCLUSTERED
E.g. Equality
IndexSortOrder Sorting order for the index. E.g Asc or Desc.
DatabaseName Name of the database hosting the table for which the
index is created.
E.g. InsightSource
SchemaName Schema of the table where the index will be created.
E.g. BS
UniqueIndex Defines if the index is unique or not, depending on
the index type. Acceptable values are 1 (meaning
Yes) or 0 (meaning No).
Configuration Defines the type of configuration for this entry.
Acceptable values are:
- Framework: which means it has been
added by the TFS Framework solution and it
is core banking agnostic
- ModelBank: this entry has been added to
satisfy Temenos core banking mapping
and/or business rules
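A hedged example row, reusing the sample values from the column descriptions above; the INSERT shape is illustrative and the Configuration value should follow your installation's conventions.

-- Request a nonclustered index on InsightSource BS.ACCOUNT (Categ_Entry) via the configuration table
INSERT INTO InsightETL.dbo.Indexes
    (DatabaseName, SchemaName, TableName, ColumnName, ColumnOrder,
     IndexNumber, IndexType, ColumnUsage, IndexSortOrder, UniqueIndex, Configuration)
VALUES
    ('InsightSource', 'BS', 'ACCOUNT', 'Categ_Entry', 1,
     1, 'NONCLUSTERED', 'Equality', 'Asc', 0, 'ModelBank');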
dbo.Batch
This log table stores general info for each Analytics ETL batch execution in InsightImport, Landing and
Source. It contains the following columns:
ColumnName Description
BatchId Record Id (identity column).
StartTime Date and time in which the batch started running.
EndTime Date and time in which the batch stopped running.
BatchStatus Processing status of the batch E.g. Processing,
Started etc.
LastEventLogID Id of the last event logged in EventLog and
EventLogDetails by this batch (only for multi-
threaded procedures in InsightImport, Landing and
Source)
LoginName The SQL Login whose credentials are used by the
batch to run. The format used will be Domain\Login
Name e.g. ANALYTICSSERVER\AnalyticsUser
HostName The server name in which the batch was running.
IsActive If set to 1 (e.g. True) this column defines that the
batch is currently active, while 0 (e.g. False) means
that the batch is inactive.
dbo.EventLog
This log table stores an entry for each main event of each Analytics ETL batch.
Any successfully executed event taking place in the Advanced Analytics platform is recorded in this table
and each row represents a normal event.
If exceptions are encountered, however, an additional record summarizing the outcome of the failed task
will be written to the log. Also, a more detailed log of the exception will be stored in the
EventLogDetails, StagingEventLog and StagingEventLogDetails tables discussed below.
ColumnName Description
EventLogId Record Id (identity column).
dbo.EventLogDetails
This log table contains further details about the information logged in the EventLog table, but only for tasks
executed within Analytics ETL in the InsightImport, InsightLanding or InsightSource databases. Any entry
recorded here will have an EventLogId that points back to its parent table, EventLog.
Multithreading information from the XML output parameter of the Insight s_ParallelExecActionQueriesTx
stored procedure is parsed into this table. s_ParallelExecActionQueriesTx enables enhanced multi-threading
in the InsightImport, Landing and Source databases and is described in detail in the chapter about the
Insight database.
EventLogDetails does not store detailed logs for the core Extraction, Transformation and Load phases of
Analytics ETL, which take place in the InsightStaging database. InsightStaging-related logs are
instead recorded separately in the StagingEventLog and StagingEventLogDetails tables, also hosted in
InsightETL.
ColumnName Description
EventLogDetailsId Record Id (identity column).
EventLogId Record Id of the corresponding EventLog entry
(foreign key).
EventTime Date and time in which the event took place
E.g. 2017-02-21 11:35:17.080
Action Action performed. Currently not in use.
ElapsedTime Time elapsed between the beginning of an event and
its completion
E.g. 00:00:00.023
ObjectTargeted Name of the object (e.g. the table) targeted by the
task, if applicable
E.g. AA_ARR_CUSTOMER
InvolvedModule Stored procedure executed, if applicable
E.g. InsightImport.Insight.s_ImportTable_Update
Severity Type of event logged. Available values are:
- Warning (for non-critical exceptions
encountered)
- Error (for errors which stopped the process)
Information (for normal events)
TaskStatus Status of the task executed, if applicable. Available
values are RanToCompletion, Faulted or null.
RowsAffected The number of rows affected by the task, if
applicable.
E.g. 42
Information Text providing information on the task executed and
its outcome.
E.g. s_Import_Control started (for Information log)
Could not dump data to *_tmp tables (for Warning)
Failed in creating a primary key. Check
[InformationDetails] (for Error)
InformationDetails Text providing more specific information regarding
the event if applicable
SQLStatement Original SQL Statement (action query) executed if
applicable. This may be in XML format
dbo.StagingEventLog
This log is updated by the stored procedures executing the Extract, Transform and Load phases during the
core Analytics ETL process in InsightStaging. It tracks execution times for the extract, transform and load
stages, rows processed, Type 1 or Type 2 changes, the number of updates and error messages for each
dimension, fact and bridge table. Use this log to review record counts, execution times or ETL run progress.
It stores general information regarding each table involved in the Staging process, and its EventLogId column
points back to the parent EventLog table.
dbo.StagingEventLogDetails
This table stores detailed information on the tasks executed during the ETL process in InsightStaging for each
table involved. This transactional-style log is updated by the InsightStaging stored procedures. Each
procedure will write to StagingEventLogDetails at the beginning of the procedure and before any major
steps within the procedure. This log can be used to investigate process interruptions or failures, or specific
procedure execution times, for the core Extraction, Transformation and Loading phases of Analytics ETL.
dbo.CurrentDate
This table stores the date of the current ETL run.
For s_InsightLandingTable_Update, this date must match the date of the data being loaded into
InsightLanding.
s_InsightStaging_update, i.e. the stored procedure which updates InsightStaging and carries out the core
Analytics ETL functionality, will not run unless this date is the same as the date of the data being processed.
dbo.SourceDate
This table stores the business date that will be used for each source system, and maps that date to the
actual business date. In the case where there are multiple entities for a source system with different dates,
there will be one row for the base source date and additional rows for each entity.
When a historical run is initiated, this table is used to pull the appropriate table based on the InsightLanding
table schema name. For cases where the business date may be different for different sources, e.g. the BS
source vs the Budget source, it is critical that this table is set up correctly.
dbo.Translation
This table is used to store report label translations.
Language Defines the language for the report label. The same label
can have a different label for each language
implemented in the installation.
Configuration This column is used to make version control in this table
more consistent and easier to manage. Configuration
defines what the source for the current row is in the
table (provided out-of-the-box by Temenos, added later
by the client as a result of local development etc.) –
available values are:
- ModelBank: this entry has been added to
satisfy Temenos core banking mapping and/or
business rules
- Local: the entry is used during the
implementation to update or enhance
Framework or ModelBank functionality
- Framework: this entry has been added to the
TFS Framework solution and it is core banking
agnostic
- PBModelBank: Used for Private Banking
record definitions
- CampaignAnalytics: Used for Campaign
Analytics solutions
- Predictive: Used for Predictive Analytics
solution when deployed
dbo.TableRowCountAudit
This table stores the record counts for the databases involved in ETL processing (e.g. InsightImport,
InsightLanding etc.). It is populated by the s_PopulateAuditCounts stored procedure during ETL.
Online.OnlineBatch
This log table stores general information for each online process run in InsightImport and InsightLanding. It contains the
following columns:
ColumnName Description
BatchId Record Id (identity column).
OnlineBatchStart Date and time in which the online process started
running.
OnlineBatchFinish Date and time in which the online process stopped
running.
OnlineBatchSeconds Delta time between the start and the end of the online
process in seconds
OnlineBatchStatus Processing status of the online process, e.g. CompletedSuccessfully etc.
LastEventLogID Id of the last event logged in OnlineEventLog by this
online process
LoginName The SQL Login whose credentials are used by the
batch to run. The format used will be Domain\Login
Name e.g. ANALYTICSSERVER\AnalyticsUser
HostName The server name in which the batch was running.
IsActive If set to 1 (i.e. True), the online batch is currently active; 0 (i.e. False) means that the online batch is inactive.
MIS_DATE Business date in YYYY-MM-DD format
MinOnlineOutputId Id of the Minimum Online output
MaxOnlineOutputId Id of the Maximum Online output
NumRowsToProcess Number of rows to be processed
Severity Type of event logged. Available values are:
- Warning (for non-critical exceptions
encountered)
- Error (for errors which stopped the process)
- NULL (for normal events)
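As a quick health check, the currently active online batch (if any) can be retrieved via the IsActive flag described above; a minimal sketch, assuming the table is hosted in InsightETL:
SELECT TOP (1)
       BatchId, OnlineBatchStart, OnlineBatchStatus, MIS_DATE, NumRowsToProcess
FROM   InsightETL.Online.OnlineBatch
WHERE  IsActive = 1
ORDER BY BatchId DESC;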
Online.OnlineEventLog
This table logs an entry for each main step of an online micro batch process. Detailed information on the step
executed is logged in the EventLog and EventLogDetails tables (for multithreading activity).
If exceptions are encountered, however, an additional record to summarize the outcome of the failed task
will be written to the log.
ColumnName Description
OnlineEventLogId Record Id (identity column).
OnlineBatchId Id of the OnlineBatch used to track all activity for the
Online micro batch run
OnlineBatchStart Date and time in which the online process started
running.
Step Description of the online event
StepStart Date and time in which the step started.
StepFinish Date and time in which the step ended.
StepSeconds Duration of the step execution in seconds.
NumTablesProcessed Number of tables processed
NumRowsProcessed Number of rows processed
NumTablesSkipped Number of tables skipped
NumRowsSkipped Number of rows skipped
Severity Type of event logged. Available values are:
- Warning (for non-critical exceptions
encountered)
- Error (for errors which stopped the process)
- Notification (for normal events)
PartitionMeasureGroups (deprecated)
PartitionMeasureGroups was a table used to configure incremental cube processing, if at least one Analytics
Content Package was installed in the Advanced Analytics Platform. Each entry in the table represented a
measure group.
TableInstructions (deprecated)
This table was used by the InsightDataManager to control which fields were read-only and which functions
were enabled or disabled on a MasterData view.
CustomColumn (deprecated)
In pre-R18 releases, this table stored data that could be customized for each installation. It held the details
of the column(s) being added and the key columns that they were based on. For example, a new Product
hierarchy column could be based on a composite of product code columns. It also stored custom SQL code
that could be inserted into the ETL process.
There were five types of data that could be added to the CustomColumn table:
SourceDataFilter A logical expression that can be added to the 'where' clause, resulting in the
mapping being applied to that portion of data. This can be null.
ExecutionPhase Stage of ETL in which the operation takes place. Generally this should be
'Extract' unless that is not possible; if so, verify with a BI Analyst whether
another value, such as 'Transform', is needed.
ViewName Name of the view created to abstract the underlying table and make it easier
for end users to consume, e.g. ProductClassification. If the type column is not
set to 0 or 1, this should be left empty.
Configuration This column is used to make version control in this table more consistent
and easier to manage. Configuration defines what the source for the
current row is in the table (provided out-of-the-box by Temenos, added
later by the client as a result of local development etc.) – available values
are:
- ModelBank: this entry has been added to satisfy Temenos core
banking mapping and/or business rules
- Local: the entry is used during the implementation to update or
enhance Framework or ModelBank functionality
- Framework: this entry has been added by the TFS Framework
solution and it is core banking agnostic
- PBModelBank: Used for Private Banking record definitions
- CampaignAnalytics: Used for Campaign Analytics solutions
- Predictive: Used for Predictive Analytics solution when deployed
Enabled_ This column is used in connection with the Configuration column.
Acceptable values for Enabled_ are 1 (which means Yes), 0 (which means
No) or NULL (which also means No).
If a table row has the Enabled_ flag set to 1, the Core banking table
definition defined in the row will be taken into consideration during the
Analytics ETL process, otherwise, it will be ignored. Enabled_ is used to
both exclude redundant Core banking tables from being loaded into
InsightImport and to disable obsolete table definitions which should not
be erased or overwritten.
CustomValue (deprecated)
In pre-R18 releases, the CustomValue table contained the values to be mapped. 15 Source and Custom
values were available to be mapped. This is applicable to Type 0 and Type 1 MasterData.
Views
Master Data views are views defined in the CustomColumn and CustomValue tables in order to create
virtual lookup tables to enable easier mapping of custom values. The Data Manager feature in the Analytics
web application uses the views to expose the mappings to the user.
V_ProductClassification Example
This demonstrates a Business Rule with type set to Lookup.
There can be many codes that identify a product in a source system and the codes can be cryptic. A
ProductClassification view is set up in InsightETL to group these codes into manageable categories for
analysis such as 'Savings', 'Term Deposits', 'Mortgage'. The following picture shows how the Product
Classification business rule appears in the RuleDefinitions table. Note the Operation column set to Lookup
and the name of the associated view defined in the TargetViewName.
The figure below shows what the corresponding records in the RuleValues table look like. Please
note that the RuleDefinitionId for all the records shown matches the RuleDefinitionId in the RuleDefinitions
table above.
Figure 21 - RuleValues entries for Product Classification business rule (partial sample)
In the following picture, we can see how the rule designer configuration is reflected in the Product
Classification view.
The Analytics ETL process populates ProductCode and users populate the other columns.
Analytics ETL will then add the mapped columns to the StagingAccount table.
V_ActiveAccount
This view is used by the system to set whether an account is Active in the system. It sets the IsActive flag
based on ClosedDateStatus, BalanceStatus, and AccountStatus.
V_SystemParameters
This view is used to provide custom run parameters where needed for Currency, Date selection and
other custom requirements in InsightStaging.
V_SystemParametersLanding
This view is used to provide custom run parameters where needed for Currency, Date selection and
other custom requirements in InsightLanding.
V_COA
This view is used to provide Chart of Account mappings for the GL objects and financial reports.
Other views
The views above (and others) are provided by default, but each installation can add additional ones or alter
the existing views as needed.
v_AllLog
This view joins all the aforementioned logging tables together, providing detailed information in event-timeline
order.
Due to the amount of data returned, the user should try the other, filtered views first to obtain the
information of interest.
v_ActiveLog
This view is based on InsightETL.dbo.v_AllLog. It returns only the logs related to the currently ACTIVE
batch.
v_ErrWarn3
This view is based on InsightETL.dbo.v_ActiveLog. It returns only the error logs (including ‘Warning’s) and
adjacent logs (one record before and one record after the error log).
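A minimal usage sketch for these filtered views (view names as documented above; SELECT * is used because the full column list of the views is not reproduced here):
-- Logs for the currently active batch only
SELECT * FROM InsightETL.dbo.v_ActiveLog;
-- Errors and warnings of the active batch, plus one adjacent record before and after each
SELECT * FROM InsightETL.dbo.v_ErrWarn3;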
v_ConvertCDPRules
This GDPR-related view maps the content of the Temenos Core Banking CDP_DATA_ERASED_TODAY table
to the InsightETL RuleDefinitions and RuleColumns column names.
v_ConvertCDPRuleCustomers
This GDPR-related view maps the content of the Temenos Core Banking CDP_DATA_ERASED_TODAY and
CUSTOMER_ACTIVITY tables to RuleCustomers column names.
v_ConvertCDPRuleCustomersRuleColumns
This GDPR-related view maps the content of the Temenos Core Banking CDP_DATA_ERASED_TODAY table
to RuleCustomersRuleColumns column names.
V_AgeGroup (deprecated)
This view was an example of a type 2 master data view where the customer age is banded into age groups
for easier reporting. Each client would configure the age ranges for their reporting needs.
RuleDefinitions
RuleColumns
RuleExpressionLevels
RuleValues
The dbo.s_LoadTemporalTables stored procedure (discussed next in this section) does the work of
populating these tables, either temporally or not. Foreign keys between the tables are also populated by this
procedure.
Inputs
@Json - JSON file passed from the Front end User Interface. The data type for this parameter is
Nvarchar(Max)
@UserName – User name associated with the Analytics user who inputted the rule.
@BusinessDate – Business date in which the new rule was inputted
Outputs
@RuleDefinitionIdMessageOut – returned output parameter in the format <success or failure
code>:<RuleDefinitionID>. The success or failure code can have two values, i.e. 1 for error and
2 for success. If an error is encountered, further details will be outputted in the ErrorMessageOut
parameter. E.g. if the RuleDefinitionIdMessageOut is ‘1:1234’, the code 1 means the rule data was
successfully added but the rule did not run due to an error, and the RuleDefinitionID is 1234. If the
message returned is ‘2:1234’, the code 2 means the rule 1234 was created and it ran successfully.
@ErrorMessageOut – Details of the error message, if any.
Example Call
DECLARE @p3 NVARCHAR(100)
SET @p3=NULL
DECLARE @p4 NVARCHAR(max)
SET @p4=NULL
EXEC S_createrulesfromui
@Json=
dbo.s_LoadTemporalTables
Description
This stored procedure loads a view or table into another table using a dynamic SQL merge. It can match
on the target table’s surrogate key or a hashed natural key. It can load temporally (where historical changes
are preserved) or non-temporally.
Inputs
@SourceDataBaseName - The database in which the source table or view is.
@SourceSchemaName – The schema of the source table or view to be loaded into a target
table.
@SourceTableName - The name of the source table or view to be loaded into a target table.
@TargetDataBaseName – The database in which the target table or view is, i.e. of the table or
view that is going to be loaded with source data.
@TargetSchemaName – The schema of the target table or view.
@TargetTableName - The name of the target table or view to be loaded with source data.
@BusinessDateFrom - If the source data has the IsTemporal column set to 1, then the value of
this parameter will be the business date of the current load. Otherwise this parameter will be set to 1900-
01-01.
@BusinessDateTo – Business date in which the new rule will expire. This parameter is set to
9999-12-31 for new records. For updated records, if IsTemporal column’s value is 1, this
parameter’s value will be set to the current BusinessDate.
@BusinessDate - The current business date.
@RowHashColumnName - The name of the binary(20) column with row hash value used to
compare source and target records. Source and Target Tables need to have a RowHash column
for this StoredProcedure to work.
@SourceNaturalKeyColumnName - The name of the natural key (of type binary(20)) of the
target table.
@SurrogateKeyColumnName - The name of the surrogate key of the target table.
@AllowDelete - If set to 1, this parameter allows IsActive to be set to 0 when the
NotMatchedBySource condition is met.
@UserName – User name associated with the Analytics user who inputted the rule.
@MatchColumn Int – This parameter is used for choosing the matching column to use for
the dynamic merge. Acceptable values are: 1 = match on the target table’s surrogate key, e.g.
RuleDefinitionId, and 2 = match on the target table’s SourceNaturalKey, e.g. SourceRuleDefinitionId.
@ParentSurrogateKeyColumnName – The name of the parent table surrogate key, eg.
RuleDefinitionId.
@Debug – This parameter should be set to 1 for debug information to be shown when the
procedure is running in SSMS.
Outputs
@ParentSurrogateKeyColumnMessageOut – This parameter stores a message for consumption
by the front end. It consists of the concatenation of two parts, i.e. a SuccessFail code (i.e. 1
or 0) and the SurrogateKeyColumnID of the created record, e.g. 1:1234.
@ParentSurrogateKeyColumnValueOut Int - The surrogate key value of the parent target
table.
@ErrorMessageOut nvarchar(Max) - The error message, if applicable
@SuccessFailOut – Success or fail code i.e. 1 for success or 0 for fail.
Example Call
Exec s_LoadTemporalTables
@SourceDataBaseName,
@SourceSchemaName,
@SourceTableName,
@TargetDataBaseName,
@TargetSchemaName,
@TargetTableName,
@BusinessDateFrom,
@BusinessDateTo,
@BusinessDate,
@RowHashColumnName,
@SourceNaturalKeyColumnName,
@SurrogateKeyColumnName,
@AllowDelete,
@UserName,
@MatchColumn,
@ParentSurrogateKeyColumnName,
@Debug,
@ParentSurrogateKeyColumnMessageOut = @RuleDefinitionIdMessageRuleDefinitions output,
@ParentSurrogateKeyColumnValueOut = @ParentSurrogateKeyColumnValueOut output,
@ErrorMessageOut = @ErrorMessageOut output,
@SuccessFailOut = @SuccessFailOut output;
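For illustration only, the call could also be made with named parameters as sketched below; the source object, dates and other values shown are assumptions, not shipped defaults:
DECLARE @MsgOut NVARCHAR(200), @KeyOut INT,
        @ErrorMessageOut NVARCHAR(MAX), @SuccessFailOut INT;
EXEC InsightETL.dbo.s_LoadTemporalTables
     @SourceDataBaseName = 'InsightETL',
     @SourceSchemaName   = 'dbo',
     @SourceTableName    = 'v_RuleDefinitionsFromJson',   -- assumed source view
     @TargetDataBaseName = 'InsightETL',
     @TargetSchemaName   = 'dbo',
     @TargetTableName    = 'RuleDefinitions',
     @BusinessDateFrom   = '2017-02-21',
     @BusinessDateTo     = '9999-12-31',
     @BusinessDate       = '2017-02-21',
     @RowHashColumnName  = 'RowHash',
     @SourceNaturalKeyColumnName = 'SourceRuleDefinitionId',
     @SurrogateKeyColumnName     = 'RuleDefinitionId',
     @AllowDelete        = 0,
     @UserName           = 'ANALYTICSSERVER\AnalyticsUser',
     @MatchColumn        = 2,    -- match on the source natural key
     @ParentSurrogateKeyColumnName = 'RuleDefinitionId',
     @Debug              = 0,
     @ParentSurrogateKeyColumnMessageOut = @MsgOut OUTPUT,
     @ParentSurrogateKeyColumnValueOut   = @KeyOut OUTPUT,
     @ErrorMessageOut    = @ErrorMessageOut OUTPUT,
     @SuccessFailOut     = @SuccessFailOut OUTPUT;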
dbo.s_CreateRuleGroup
Description
This stored procedure looks after the execution of the rules defined through the Rules Engine. It reads the
configuration tables RuleDefinitions, RuleColumns, RuleExpressionLevels, RuleValues and RuleFilters
and creates a new base table or view with columns added as a result of the defined rules.
Inputs
@DataBaseName - The database in which the rule is.
@TableName - The name of the table storing the rule.
@SchemaName – The schema of the table storing the rule
@ExecutionPhase - core Analytics ETL phase in which the operation is taking place (only
applicable to Dataset operations)
@ExecutionStep - core Analytics ETL step in which the operation is taking place (only applicable
to Dataset operations)
@BusinessDate - The current business date.
@IsPersisted – If set to 1, the rule will add a physical column to a base table. If set to 0, the rule
will create a view around a base table with the column(s) added to the view.
@RuleDefinitionId – RuleDefinitionId value
Outputs
@RuleDefinitionIdMessageOut – returned output parameter in the format <success or failure
code>:<RuleDefinitionID>. The success or failure code can have two values, i.e. 1 for error and
2 for success. If an error is encountered, further details will be outputted in the ErrorMessageOut
parameter. E.g. if the RuleDefinitionIdMessageOut is ‘1:1234’, the code 1 means the rule data was
successfully added but the rule did not run due to an error, and the RuleDefinitionID is 1234. If the
message returned is ‘2:1234’, the code 2 means the rule 1234 was created and it ran successfully.
@ErrorMessageOut nvarchar(Max) - The error message, if applicable
@SuccessFailOut – Success or fail code i.e. 1 for success or 0 for fail.
dbo.s_DQBulkCopyCSV
Description
This SQL-CLR procedure is used to perform the multithreaded Data Quality import process, using CSV files as
sources. It is internally called by InsightImport.Insight.s_DQImportTable during Analytics ETL.
Inputs
@SourceCSVType – Type of Source CSV file used.
@TargetTableName – Name of the Target Table.
@IsIdentityInsertOn – Defines whether explicit values can be inserted into the identity column
of the table (1) or not (0)
@ParallelismDegree – The maximum number of worker threads that can be used in parallel.
The default setting is -1, which means to determine automatically.
@Encoding – Character encoding. For Temenos Core Banking CSVs, the default value for the
@Encoding parameter is 1683 for UTF-16 Big Endian. Other supported values are
7 (UTF-7), 8 (UTF-8), 1673 (UTF-16LE), 3273 (UTF-32LE), 3283 (UTF-32BE) or 18030 (GB-18030).
Outputs
@xmlLog - The details of multithreading info are outputted from the SQL-CLR stored procedure
into an XML variable.
@TotalRowsTransferred – Total number of rows transferred
@RowsRevised – Total number of rows revised
@FieldsRevised – Total number of columns revised
dbo.s_DQParseMulti-value
Description
This SQL-CLR procedure is used to perform multithreaded multi-value parsing with Data Quality, using the
corresponding base table as source. It supports parsing all types of ‘sub’ tables, including LR, LRSV, MV
and MVSV.
Inputs
@TargetTableName – Full name of the target sub table.
@IsIdentityInsertOn – Defines whether explicit values can be inserted into the identity column
of the table (1) or not (0)
@ParallelismDegree – The maximum number of worker threads that can be used in parallel.
The default setting is -1, which means to determine automatically.
Outputs
@xmlLog - The details of multithreading info are outputted from the SQL-CLR stored procedure
into an XML variable.
dbo.s_ParallelExecActionQueriesTx
Description
This SQL-CLR stored procedure executes in parallel a series of T-SQL action queries on behalf of
each of the aforementioned ETL stored procedures. Action queries are used to copy or delete source system
data, create new tables and run other update queries during Analytics ETL processing.
s_ParallelExecActionQueriesTx also outputs detailed logs, including threading information, warnings, any
errors encountered and the outcome of each task executed by the individual action queries monitored.
To better understand how this process works, let us take the example of the s_ImportTable_Update stored
procedure in InsightImport. The core action to be performed by this routine is considered an individual
action query i.e. the bulk insert statement which populates Import tables from CSV files.
When s_ParallelExecActionQueriesTx is called within the s_ImportTable_Update stored procedure, the bulk
insert statement within the Import Table Update procedure is assigned to the @xmlQueries input
parameter of the multi-threading SQL-CLR procedure as follows.
As we can see, the SELECT statement used to assign a value to @xmlQueries returns three elements
aliased as ActionQuery, Target, and Source. The ActionQuery element contains, as text, the action query to
be executed in parallel. The other two elements, which are also part of the @xmlQueries structure, are
explained in the Inputs section below.
SELECT @xmlQueries = (
SELECT
N'BULK INSERT [InsightImport].[dbo].[v_' +
@TableName + '] FROM ' + QUOTENAME(@PathName + '\' + BIFileName + '.csv', '''') +
N' WITH ( FIRSTROW=2, FIELDTERMINATOR=' +
QUOTENAME(CHAR(126), '''') + ', ROWTERMINATOR=''\n'', CODEPAGE=''1201'',
DATAFILETYPE=''WideChar'', MAXERRORS = 0, TABLOCK );'
AS [ActionQuery],
@TableName AS [Target],
@InvolvedModule AS [Source]
FROM (
SELECT DISTINCT BIFileName,
LTRIM(RTRIM(TableName)) AS TableName
FROM [Insight].[v_ImportFileList]
WHERE TotalRecords > 0
AND 1 = CASE WHEN @TableName = N'ALL' THEN
1
WHEN TableName =
Replace(@TableName,'_tmp','') THEN 1
ELSE 0
END
) AS A
ORDER BY BIFileName DESC
FOR XML RAW('Task'), ROOT('Root'), ELEMENTS),
@sql = CONVERT(NVARCHAR(MAX), @xmlQueries);
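Because of the FOR XML RAW('Task'), ROOT('Root'), ELEMENTS clause, @xmlQueries ends up containing one <Task> element per file to be bulk inserted. A simplified sketch of its shape (the table name, file path and elided WITH options are illustrative only):
<Root>
  <Task>
    <ActionQuery>BULK INSERT [InsightImport].[dbo].[v_AA_ARR_CUSTOMER] FROM 'C:\Import\AA.ARR.CUSTOMER.csv' WITH ( FIRSTROW=2, ... );</ActionQuery>
    <Target>AA_ARR_CUSTOMER</Target>
    <Source>InsightImport.Insight.s_ImportTable_Update</Source>
  </Task>
  <!-- one Task element per CSV file with TotalRecords > 0 -->
</Root>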
This action query is composed of multiple tasks, as the statement above has to be started, executed and
completed for every individual object (i.e. table) to be processed. Each individual task in the action query
is carried out via a separate connection by a randomly assigned thread, which runs in the
background. In addition, two other input parameters control the maximum number of threads
that can run concurrently and whether to stop or continue the process if one of the tasks fails
while the others are still running.
The execution order of tasks across threads is determined based on workload and resource availability and
is effectively random. This is acceptable for the data processing required in the Import, Landing and
Source databases, as no dependencies or special ordering are required for creating their tables and loading
data into them.
The output of each task executed by a thread will be stored in a log in XML format, which can be parsed
and saved to the logging tables in the InsightETL database.
Transactional Multi-threading
This kind of multi-threading configuration could potentially generate database deadlocks if a block of
interwoven T-SQL statements is multithreaded instead of multithreading at the finest granular level (in
the latter scenario, data-access conflicts are much easier to handle).
To avoid this kind of issue, s_ParallelExecActionQueriesTx has the ability to do transactional multithreading
(hence the ‘Tx’ suffix, which stands for transaction), meaning that locks are applied on the objects
being manipulated only for the duration of an individual transaction and not while the entire stored
procedure or Analytics ETL job is being executed – this eases compliance with ACID standards (i.e.
Atomicity, Consistency, Isolation, and Durability), if required. How the transaction behaviour can
be configured through a series of optional input parameters is discussed in the next section.
Inputs
@xmlQueries - This input parameter stores the action query to be multi-threaded by
s_ParallelExecActionQueriesTx. All concurrent T-SQL tasks are described and submitted to the
procedure in this XML parameter. Each task consists of three elements including:
1. ActionQuery
This element is identified by a keyword either tagged as <Action>…</Action> or
<Query>…</Query>. The task is represented by one or multiple T-SQL statements listed
between the tags. If there are multiple statements, they must be delimited by
semicolons.
Supported T-SQL statements include most of the action queries which do not return row
sets. For each task, the number of rows affected is based on the last T-SQL statement of
the task.
2. Target
This is the object to be targeted e.g. the name of the table to be loaded into a database.
This optional element is identified by a pair of XML tags <Target>…</Target>. It helps
logging be more self-explanatory if the targeted database entity is clearly stated.
3. Source
This is the source procedure; this optional element is identified by a pair of XML tags
<Source>…</Source>. The name of the calling procedure can be listed here for each task.
@ParallelismDegree - The maximum number of worker threads that can be used in parallel.
The default setting is -1, which means to determine automatically.
@ContinueWithError - When the cancellation feature is enabled (i.e. this parameter is set to
0, i.e. False), as soon as the first error is encountered in any thread, the other running tasks can be
immediately aborted and the remaining pending tasks will be canceled; on the other hand, if the
cancellation feature is disabled, all the tasks will be carried out independently all the way to the
end, regardless of the outcome of the other tasks. The default setting is True (1).
@UseTransactionalThread: Every single thread carrying out the tasks can be set as
transactional or not. The default setting is 0, i.e. False.
It is worth noting that this option is for the inner transaction that a worker thread directly runs
within. Any ambient transaction controlled from outside of the SQL-CLR stored
procedure is always suppressed.
Note that ‘Chaos’ mode is not supported in SQL Server Database Engine, therefore
‘@IsolationLevel = 0’ is not a valid option in the SQL-CLR stored procedure.
Outputs
@xmlLog -- The details of multithreading info are outputted from the SQL-CLR stored procedure
into an XML variable.
This XML output parameter can be parsed and shredded into a table by utilizing the helper
table-valued function dbo.fn_GetXmlLogFromParallelExecActionQueriesTx. The data logged
by this function based on the content of the @xmlLog output parameter is discussed in the
section dedicated to the logging tables.
USE InsightImport;
DECLARE @xmlTasks XML, @xmlLog XML, @sql NVarChar(max), @spReturn Int,
@Info NVarChar(max), @EventLogID Int;
SELECT
@xmlTasks = (
SELECT
CONCAT('EXEC InsightImport.Insight.s_Attributes_Update @EntityID = ', EntityID,
', @BatchNum = 5678') AS [ActionQuery],
Name AS [Target],
's_Import_Control' AS [Source]
FROM InsightImport.Insight.Entities
ORDER BY EntityID
FOR XML RAW('Task'), ROOT('Root'), ELEMENTS),
@sql = convert(nvarchar(max), @xmlTasks);
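-- The excerpt above only builds the task list; the call that actually executes the tasks and
-- populates @spReturn and @xmlLog is not shown. A hedged sketch of how it might be invoked
-- (the fully qualified procedure name and the parameter values used here are assumptions):
EXEC @spReturn = InsightETL.dbo.s_ParallelExecActionQueriesTx
     @xmlQueries        = @xmlTasks,
     @ParallelismDegree = -1,        -- determine the number of worker threads automatically
     @ContinueWithError = 1,         -- carry on with the remaining tasks if one of them fails
     @xmlLog            = @xmlLog OUTPUT;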
-- Logging
SET @Info = '[Multithreading]: ' +
iif(@spReturn = 0,
'Successfully updated Attributes ',
concat(@spReturn, ' error(s) encountered while updating Attributes '));
dbo.s_ExecSingleActionQuery
Description
This stored procedure is part of the multi-threading or parallelization process. It is used for creating primary
keys in parallel when InsightSource.dbo.s_InsightSource_Update is run. It is particularly useful because it
is able to return any error message thrown by the SQL Server Database Engine, and it is consequently also used in
the batch control process to update the Batch table in InsightETL.
s_ExecSingleActionQuery can run a T-SQL action query in either async or sync mode within CLR and returns
all error messages in case a thread fails, so this stored procedure can also be used for logging purposes.
In async mode, the action query is run by an auxiliary worker/background thread and the main thread
does not wait for the auxiliary worker thread to complete. Instead, the main/calling
thread continues to the next task (for example to create other PKs for other tables) as soon as the auxiliary
worker thread is started. The worker thread runs asynchronously and will report exceptions to the main
thread once it returns.
In sync mode, the main thread waits for the auxiliary thread to complete its task before moving on
to the next one. However, s_ExecSingleActionQuery reports exceptions in the same way as in async
mode.
Although this procedure is able to return all error messages, the built-in TRY-CATCH in T-SQL
only captures the very last error in its ERROR_MESSAGE() function and misses the first error message.
dbo.s_Parse_XML
Description
This stored procedure is used to load XML data into tables, normally during the post-deployment process.
dbo.s_MultiStringSplitToRows
Description
This CLR stored procedure splits multi-value columns and distributes them into sequential rows. Columns
are typed as specified in the parameter @MultValueColumnSchemaList. This procedure is called during the
Import process by s_T24AllMulti-value_Add, which in turn is executed within s_Import_Control in
InsightImport.
Inputs
@InputString – string to be inputted, varchar(max)
@Delimiter – delimiter to be used, varchar(1)
@Multi-valueColumnSchemaList – list of multi-value columns, varchar(max)
@TypeofScriptOutput – type of output script, int
dbo.s_StringSplitToColumns
Description
This CLR stored procedure splits a local ref column into multiple columns. Columns are either string typed
or other data types as specified by the parameter @LocalRefSchemaList. This procedure is called during
the Import process by s_T24AllMulti-value_Add which in turn is executed within s_Import_Control in
InsightImport.
Inputs
@InputString – string to be inputted, varchar(max)
@Delimiter – delimiter to be used, varchar(1)
@LocalRefSchemaList – list of Local ref schemas, varchar(max)
@LocalRefColumn – list of Local ref columns, varchar(128)
dbo.s_CreateColumnCalculations
Description
This stored procedure carries out the split, calculation and dataset operations described in the
AttributeCalculations table. It can be run manually for testing purposes or it can be included in the Analytics
ETL SQL Server Agent job or script in the appropriate position. E.g. if the Splits and Calculations
entries on imported data are used to make the later Extract logic in v_source views simpler and faster, the call
would be placed after all the InsightImport steps.
Inputs
@DatabaseName – name of the database for which the stored procedure will be executed
@TableName– name of the table for which the stored procedure will be executed
@SchemaName – name of the schema of the table for which the stored procedure will be
executed
@ExecutionPhase - core Analytics ETL phase in which the operation is taking place (only
applicable to Dataset operations)
@ExecutionStep - core Analytics ETL step in which the operation is taking place (only applicable
to Dataset operations)
@BatchNum – Batch number used to execute the stored procedure.
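A hedged example call, mirroring the ‘Attribute Calculations-Source’ usage described in the Column Splits section later in this chapter; the values are illustrative, and the NULL phase/step values assume a non-Dataset operation:
EXEC InsightETL.dbo.s_CreateColumnCalculations
     @DatabaseName   = 'InsightSource',
     @TableName      = 'LIMIT',
     @SchemaName     = 'dbo',
     @ExecutionPhase = NULL,   -- only applicable to Dataset operations
     @ExecutionStep  = NULL,   -- only applicable to Dataset operations
     @BatchNum       = 1234;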
dbo.s_CreateIndexes
Description
This stored procedure creates additional indexes, as defined in the Indexes configuration table. It
can be run manually for testing purposes or it can be included in the Analytics ETL SQL Server Agent job or
script in the appropriate position. E.g. if additional indexes are required on foreign keys in the
InsightSource database, the call would be placed after the InsightSource update and before the InsightStaging
update.
Inputs
@TableName– name of the table for which the stored procedure will be executed
@DatabaseName – name of the database for which the stored procedure will be executed
@TenantId – Id of the database tenant
@BatchNum – Batch number used to execute the stored procedure.
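A hedged example call (the table name and values are illustrative only):
EXEC InsightETL.dbo.s_CreateIndexes
     @TableName    = 'LIMIT',          -- illustrative table name
     @DatabaseName = 'InsightSource',
     @TenantId     = 1,
     @BatchNum     = 1234;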
dbo.s_PopulateAuditCounts
Description
This procedure populates the dbo.TableRowCountAudit log with the record counts for the databases
involved in ETL (e.g. InsightImport, InsightLanding etc.). This stored procedure is executed several times
during various stages of ETL.
Inputs
@DatabaseName – name of the database for which the stored procedure will be executed e.g.
InsightImport
@ExtractListSourceName – name of the source system for which the stored procedure will be
executed e.g. BS
@SchemaName – name of the schema of the table for which the stored procedure will be
executed e.g. dbo
@TableType – Type of Table for which the stored procedure is executed e.g. ‘Hash Table’, 'DMV',
'Select' etc.
@ETLPhase – name of the ETL Phase for which the stored procedure will be executed e.g. dbo
@TableName – name of the table for which the stored procedure will be executed. ‘All’ keyword
is acceptable
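A hedged example call (the @ETLPhase value shown is an assumption):
EXEC InsightETL.dbo.s_PopulateAuditCounts
     @DatabaseName          = 'InsightImport',
     @ExtractListSourceName = 'BS',
     @SchemaName            = 'dbo',
     @TableType             = 'Select',
     @ETLPhase              = 'Import',   -- assumed phase label
     @TableName             = 'All';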
dbo.s_EventLog_Add
Description
This stored procedure adds new rows to the EventLog table.
dbo.s_EventLogDetails_Add
Description
This stored procedure adds new rows to the EventLogDetails table.
dbo.s_MergeUpdateBatchStatus
Description
This stored procedure creates a new batch in the Batch table with a new batch number or it continues to
use the current active batch and performs in-place updates to the involved Batch record.
dbo.s_SetBatchStatusFinished
Description
This stored procedure marks the completion of any batch executed, either ‘CompletedWithError’ if there
are error(s) or ‘CompletedSuccessfully’ when no error is encountered.
dbo.s_StagingEventLog_Add
Description
This stored procedure adds new rows to the StagingEventLog table.
dbo.s_StagingEventLog_Update
Description
This stored procedure performs in-place updates to the rows in the StagingEventLog table.
dbo.s_StagingEventLogDetails_Add
Description
This stored procedure adds new rows to the StagingEventLogDetails table.
dbo.s_InsightETL_Purge
Description
This stored procedure purges the content of the Log tables within the InsightETL database i.e. EventLog,
EventLogDetails, StagingEventLog and StagingEventLogDetails.
dbo.s_InsightETL_RangePurge
Description
This stored procedure internally calls the s_InsightETL_Purge stored procedure to purge Log tables within
a certain date range.
dbo.s_ColumnStoreIndex_Defragmentation
Description
This stored procedure is used for index defragmentation in the InsightLanding database. Even though this
procedure is stored in InsightETL, it operates on InsightLanding and it is described in detail in a dedicated
section of the InsightLanding chapter of this document.
dbo.s_LoadCDPRules
Description
This stored procedure loads General Data Protection Regulation / Customer Data Protection metadata from
Temenos Core Banking into the Analytics rules engine so that update statements can be generated to erase
certain customer attributes. More details about the usage of this and other CDP-related procedures are
provided in the chapter of this document dedicated to General Data Protection Regulation.
dbo.s_LoadCDPAnalyticsRules
Description
This procedure loads metadata from the InsightWarehouse data dictionary into the Analytics Rules Engine for
the purposes of the Right to Erasure for CDP/GDPR. This procedure is for Analytics data erasure in
InsightWarehouse, as opposed to raw Temenos Core Banking data in InsightLanding. The following tables
are loaded:
- CDPPurpose
- RuleDefinitions
- RuleColumns
- RuleCustomersRuleColumns (it is assumed that the customers will already have been loaded from Temenos Core
Banking sources).
dbo.s_CreateCDPUpdateLogic
Description
This procedure produces an update statement given a list of columns.
dbo.s_ExecuteCDPRules
Description
This procedure executes the CDP-related rules and logs the results.
dbo.s_CreateCDPUpdateStatements
Description
This procedure creates update statements for CDP-related rules and logs the results.
Online.s_OnlineUpdateBatchStatus
Description
This stored procedure creates a new batch in the Online.OnlineBatch table with a new Online batch number
or it continues to use the current active batch and performs in-place updates to the involved Online Batch
record.
dbo.fn_GetFromJson_RuleDefinitions
Description
This table-valued CLR function accepts a JSON file and returns a dataset to load into RuleDefinitions.
Example Call
declare @json nvarchar(max)=
N'[--JSON in here--
]
';
Select *
From InsightETL.dbo.fn_GetFromJson_RuleDefinitions(@json);
dbo.fn_GetFromJson_RuleColumns
Description
This table-valued CLR function accepts a JSON file and returns a dataset to load into RuleColumns.
dbo.fn_GetFromJson_RuleExpressionLevels
Description
This table-valued CLR function accepts a JSON file and returns a dataset to load into RuleExpressionLevels.
dbo.fn_GetFromJson_RuleValues
Description
This table-valued CLR function accepts a JSON file and returns a dataset to load into RuleValues.
dbo.fn_GetXmlLogFromParallelExecActionQueriesTx
Description
This Insight function is used to populate the EventLog and EventLogDetails tables in InsightETL with the
output logs created by the threads which execute parallelized action queries during Analytics ETL. In other
words, each stored procedure relying on Analytics enhanced multi-threading will log its activity in InsightETL
through this Insight function.
dbo.fn_GetCurrentBatch
Description
This scalar-valued multi-threading-related function assigns activated batches to execute tasks.
dbo.fn_GetJsonErrMsg
Description
This table-valued function can be used with the APPLY operator to format the JSON error messages in the
ETL_DQ_ErrorMessage column and output them as separate fields.
dbo.fn_GetMultithreadingLog
Description
This table-valued function is used to display or archive threading logs.
Online.fn_OnlineGetCurrentBatch
Description
This scalar-valued function returns the Online Batch Id of the active Online micro batch.
s_IMD_AllViews (deprecated)
In pre-R18 releases, this InsightMasterData stored procedure created views for all Type 1 and Type 0
master data records by joining the CustomColumn table to the CustomValues table on the appropriate
CustomColumnID.
s_IMD_Views (deprecated)
In pre-R18 releases, this InsightMasterData stored procedure created a view for a particular
CustomColumnID.
Input
CustomColumnID – The id of the CustomColumn table for which a view needs to be created
InsightStaging..s_Transform_MD (deprecated)
In pre-R18 releases, this stored procedure (in the InsightStaging database) updated staging tables and fact
tables based on the MasterData stored in the CustomColumn and CustomValue tables.
Inputs
TableName – Name of the InsightStaging table to be updated (e.g. StagingAccount)
ETL Phase – ETL phase in which the update should take place (i.e. Extract or Transform)
e.g.: Exec s_transform_MD ‘StagingAccount’, ‘Extract’
s_InsightEditor_TableLoad (deprecated)
In pre-R18 releases, this stored procedure was used by the DataManager application to load tables.
Input:
Schema – Schema of the table to be loaded into Data Manager
Table – Name of the table to be loaded into Data Manager
Where – Any filters (i.e. where clause) to be applied to the data loaded
OrderBy – Sorting order (i.e. order by clause) to be applied to the data loaded
s_Translation_Check (deprecated)
In pre-R18 releases, this stored procedure checked the quality of Translations data.
Output
List of Item Names that have different Translation across same or different object types
List of Column Names in the translation table that are not present in the data dictionary
List of Column Names in the Data Dictionary table that do not have a corresponding translation
List of Cube attribute or measures that do not belong to any display folder - deprecated
s_ParallelExecActionQueries (deprecated)
Description
This is the previous version of s_ParallelExecActionQueriesTx, mainly used in R16 but still available for
backward compatibility.
It has the ability to capture errors that happened in other threads, to cancel tasks scheduled in other threads,
and to control the maximum number of worker/background threads (e.g. via the parameter
@MaxLogicalCpuNumber or, in the newer version, @ParallelismDegree). However, it has no ability to perform
transactional multithreading.
s_ExecuteInParallel (deprecated)
Description
This is a very early version of the multithreading CLR procedure, which became obsolete after R15.
s_ParseMultiStringToColumns (deprecated)
Description
This stored procedure was used for parsing tables with multi-values and sub-values in pre-R15 releases
and is still available for backward compatibility. It is now replaced by the s_MultiStringSplitToRows stored
procedure in Insight MasterData.
s_ParseStringToColumns (deprecated)
Description
This stored procedure was used for parsing tables with local references in pre-R15 releases and is still
available for backward compatibility. It is now replaced by the s_StringSplitToRows stored procedure in
Insight MasterData.
s_Translate_Report (Deprecated)
Description
This is used by the translator module to add the translated report labels to the Reporting Services reports.
Inputs:
Smexml xml
ssasName nvarchar(100)
s_Translate_Cube (Deprecated)
This is used by the translator module to translate cube labels.
Inputs:
Smexml xml
ssasName nvarchar(100)
ssasDBName nvarchar(100)
Triggers
Online.Tr_OnlineBatch_AfterIU_DeactiveOldBatches
Description
This trigger on the Online.OnlineBatch table updates all other, older Online batch records to be inactive,
except the last one inserted/updated.
Configuration
This section describes how the various functions in InsightETL should be configured.
The InsightETL SourceDate table needs to contain a record, for each source system, for each of the dates
that Analytics ETL will be run for.
This table maps the date of the source system data (Source Date) to a Business Date. These dates can
differ; this is a common occurrence when two source systems are loaded, one with a daily frequency
and one with a monthly frequency. The latest available date for the monthly source system would be used
for every business date until the next month’s data is available.
The system allows you to override the default source dates and control how the dates are created using
the v_System_Parameters InsightETL view. dbo.s_InsightSource_CSI_Update populates this table with
the appropriate date, or you can update it directly and set the ‘IsDateOverride’ flag for full manual control
of the source date.
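A minimal sketch of such a manual override; only the IsDateOverride flag is named in the text above, so the SourceSystem, SourceDate and BusinessDate column names used below are assumptions for illustration:
UPDATE InsightETL.dbo.SourceDate
SET    SourceDate     = '2017-01-31',  -- latest available monthly extract (assumed column)
       IsDateOverride = 1              -- take full manual control of the source date
WHERE  SourceSystem   = 'Budget'       -- assumed column
  AND  BusinessDate   = '2017-02-21';  -- assumed column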
Rules Engine
Users can define new rules and modify existing ones using the Data Manager option within the Analytics
application – please refer to the Analytics Front End User Guide for more information on how this is done.
An out-of-the-box set of business rules applied to the InsightStaging database is provided by Temenos.
Users can, however, create business rules that, during Analytics ETL, will also be applied to tables or views
residing in the InsightSource and InsightImport databases.
If the financial institution requests it, Temenos can also enable functionality that allows business
rules to be designed against the InsightLanding database’s tables or views and against the InsightWarehouse
abstraction views.
Column Splits
The setup of business rules used to split compound columns is currently not possible through the Rules
Engine. Split rules can only be designed using the AttributeCalculations table. Let us look at an example to
understand how this kind of configuration works. In the figure below we can see the relevant columns of
a split definition in the AttributeCalculations table. The locally developed split below is used to break the
LIMIT @ID into three parts; this is a compound primary key consisting of the values of CUSTOMER
@ID, LIMIT.REFERENCE @ID and a sequence number, separated by a dot (‘.’). As defined below, the split
will be applied to the LIMIT table of the InsightSource database.
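To make the intent of this split concrete, the sketch below shows the three parts such a definition produces for a sample LIMIT @ID value. It is an illustration only: the actual work is done by s_CreateColumnCalculations from the AttributeCalculations entry, and the sample value and output column names are assumed.
DECLARE @LimitId VARCHAR(50) = '100069.0000187.01';   -- illustrative LIMIT @ID value
SELECT PARSENAME(@LimitId, 3) AS CustomerId,           -- CUSTOMER @ID part
       PARSENAME(@LimitId, 2) AS LimitReferenceId,     -- LIMIT.REFERENCE @ID part
       PARSENAME(@LimitId, 1) AS SequenceNumber;       -- sequence number part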
Like the business rules defined through the Rules Engine, the split rules managed in the
AttributeCalculations table are executed within the Analytics ETL flow. Specifically, split rules are applied
by the s_CreateColumnCalculations stored procedure which, in the out-of-the-box Analytics ETL agent job
provided by Temenos, is run twice: first on the InsightImport database and then on the InsightSource
database, within the Insight Attribute Calculations-Import and Insight Attribute Calculations-Source steps,
respectively. The commands within each step are identical, with the exception of the first input parameter’s
value, i.e. the database name.
In our example, the split definition indicates that the rule should be applied in the InsightSource database,
so the Limit Id column will be split in InsightSource when the Attribute Calculations-Source step of Analytics
ETL is executed.
Figure 25 - Analytics ETL steps executing column splits and their commands
Once the relevant step of the Analytics ETL is executed, the structure of the target table will be modified
to contain the new columns resulting from the split.
Figure 26 - Example of three more columns added to InsightSource.dbo.LIMIT as a result of the split rule
Maintaining InsightETL
Probably the most common maintenance task is to ensure that the additional mapping defined in InsightETL
is up to date. InsightETL contains additional rule definitions that are mapped based on the
content of RuleValues or RuleExpressionLevels.
As new source data comes into Analytics, new rule values can be created using the Data Manager facilities
in the Analytics web application. Depending on the type of business rule defined in Analytics, these
new values may create the need to define additional mappings in Analytics. For a full understanding of
the different business rule types/operations please refer to the section on InsightETL earlier in this
document.
Lookup Rules
The most common business rule type that needs to be maintained is Lookup. This is the type of rule that
is used to build custom classifications or hierarchies. One or more source columns are defined as a key for
the target column, and every time a unique source column value is brought in, a new row is created in the view
for that target.
As an example, if we are defining a product or account hierarchy based on a unique product code coming
from the source system, then we would use the product code as the source value for the target column and
one or more custom values. In this example, we will define the view for the target column as
ProductClassification. When a new product code is brought in by the ETL process, a new record will
be created in the ProductClassification view for the new source value, with N/A as all target column values.
A user will have to replace all N/A values with appropriate mapping values, in this case the appropriate
product classifications. Follow the example below to see a typical lookup target column update, where we
are defining the Category and Classification custom columns for the account object.
When ETL is run, the new values are brought into the warehouse as N/A. At this point, we need to replace
the N/A values with appropriate mapping so the new product can be mapped to the appropriate account
category and classification. This is done using the Data Mappings tab of the Edit Rule screen of the Analytics
web front end.
Figure 27 - Updating Lookup mapping using the Data Mappings tab in the Edit Rule screen
For Lookup rules, the screen above will update the RuleValues table and the associated InsightETL view,
e.g. v_ProductClassification in our case, as shown in the following table.
Once the above update has been made and ETL has run again, the N/A values for Category and
Classification for product code 1010 will be replaced with the updated mapping in v_ProductClassification.
This maintenance has to be done for every Lookup business rule in InsightETL as new data is entered.
Note that two ETL runs will have completed before the new source column value is classified and
the new custom column value is added to the warehouse.
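A quick way to spot values still waiting to be classified is to look for the default ‘N/A’ mappings in the lookup view; a minimal sketch, assuming the v_ProductClassification view and the Category and Classification target columns from the example above:
SELECT *
FROM   InsightETL.dbo.v_ProductClassification
WHERE  Category = 'N/A'
   OR  Classification = 'N/A';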
Depending on the Master Data configuration view, for example v_COA (Chart of
Accounts) and v_InsightHierarchy (internal hierarchy of the COA), it is recommended that as soon as you
update or add a GL Account / Line in Core Banking or the GL source system, you also add/update the same
record in these two views via Data Manager before the end of the day. This way, when the ETL runs, the
proper hierarchy and attributes are found and mapped correctly for the new records in the InsightWarehouse.
Otherwise, the new GL Accounts will have their attributes set to the default value of ‘N/A’; at this point you
will need to change these values in Data Manager and then reprocess the ETL so that the proper mappings
are reflected for the new GL Accounts and the existing reports show accurate data. The Data QA report also
shows the number of ‘N/A’ values currently in the system so that they can be addressed accordingly.
Banding Rules
While less common, Banding business rules could require an update if the case statement (banding logic)
does not cover all possible source column values. For example, if there is no record to account for a
source value of NULL, then the target column value will be brought in as NULL. The target column will have
a NULL value any time the source column has a value not covered by the banding statement.
In this example, we are not accounting for cases where the source column Age has a NULL value. Any time
we have a NULL Age, the custom column Age Group will also be NULL. To add a new row for the NULL
age, we again use the Data Mappings tab of the Edit Rule screen in the Analytics web front end.
We can click on the Add a row button as shown below to add a blank row to the list of buckets associated
with our banding rule, and then complete the required fields. Please note that, unlike for Lookups, the screen
below will not affect the RuleValues table this time. Instead, when we edit or add a new group in a banding
rule, the RuleExpressionLevels table is updated.
Now that we have added the new record to RuleExpressionLevels, any time a NULL value is found for Age,
the value No Age will be used for the Age Group custom column. Since all conditions are now covered
by our banding statement, we should no longer have any NULL values for the Age Group target column in
the warehouse.
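Conceptually, the banding rule now behaves like the CASE expression sketched below; the band boundaries are illustrative and the real logic is generated from the RuleExpressionLevels entries:
SELECT Age,
       CASE
            WHEN Age IS NULL           THEN 'No Age'   -- the newly added group
            WHEN Age BETWEEN 0 AND 17  THEN '0-17'
            WHEN Age BETWEEN 18 AND 64 THEN '18-64'
            ELSE '65+'
       END AS AgeGroup
FROM (VALUES (12), (45), (70), (NULL)) AS Sample(Age);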
Additionally, this interface can be configured so that end users can only edit the target columns of the Banding
rules they are intended to edit, which allows for detailed control of what master data can and should be
modified by end users.
For a full explanation of using the Data Manager option please consult the Analytics Web Front End User
Guide.
Important Note: Please note that changes to maintenance stored procedures are
not supported.
InsightStaging
Overview
InsightStaging extracts data from InsightSource database tables, transforms it and then loads it into the
InsightWarehouse database. Most of the InsightStaging tables are temporary and recreated for each
Analytics ETL execution.
Interfaces with Analytics optional modules – InsightStaging can exchange ETL data with Analytics optional
modules, e.g. Customer Profitability, Predictive Analytics etc. For example, if Customer Profitability is
installed, this module will calculate Customer Monthly Net Income and other parameters, which will be
added as columns to the temporary tables in InsightStaging and finally loaded into the Warehouse.
Table-driven orchestration of ETL procedures – The flow of the Analytics ETL is controlled based on the
content of the UpdateOrder table. Analytics ETL components can be easily added, changed or removed.
Configuration of new source systems – Source systems can be added and enabled or disabled.
Technical Details
Architecture
The basic architecture of InsightStaging data flow is depicted in the following diagram which shows the
high-level flow for the Extraction, Transformation, and Loading of a Dim and Fact Combination.
Technical Components
InsightStaging consists of the following components.
Tables
dbo.Systems
This table controls which source systems are included in the Analytics ETL process. Using different source
systems allows for a different Analytics ETL configuration per source system, and allows data extraction for
a system to be turned off for a particular Analytics ETL run if required.
ColumnName Description
SourceSystem The name of the source system, which must match the sourcename schema in the
extract list, e.g. BS, Budget etc.
dbo.UpdateOrder - Standard
This table controls the execution order of Analytics ETL processes executed during the InsightStaging
update process.
The table has two columns that allow processes to be enabled or disabled, i.e. Enabled_ and Exclude. The
value of the Enabled_ column defines whether the source system in which a certain process belongs is
enabled or disabled in the current Analytics installation. The value of this column is inherited from the value
assigned to the Enabled_ column for its source system entry in the Systems table. Processes will only run
if their associated source system is enabled in this table.
If Enabled_ is set to 1, the Exclude column allows excluding any individual step or group of steps from
today’s Analytics ETL process, while permitting them to be re-enabled at another time.
For example, the Insight Pricing update can be turned off or the FactGL could be initially disabled but then
enabled when source data becomes available.
Important Note: the value of the column Enabled_ in Update Order should not be
updated directly, but only by changing the value of the Enabled_ column for the
associated source system entry in the Systems table. The content of the column
Enabled_ in Update Order is regenerated based on the content of the System table so
direct changes will not be preserved after an Analytics ETL run.
- Extract
- Extract.substep
- Merge.substep
- Add.substep
- Transform
- Transform.Substep
- Load
Enabled_ Defines if the source system in which a process belongs is enabled. Acceptable values
are 1 (for Yes) or 0 (for No) and they are dependent on the Systems table.
Exclude Configure this column to exclude the current process from Analytics ETL. If set to 0,
the process is included and will be executed in the next ETL run. If set to 1, the
process is excluded and will be ignored in the next ETL run.
ETLPhaseNum One-digit code for the ETL phase of the currently defined process. Acceptable values
are 1 to 7. It defines the first digit of the UpdateOrder id column.
TableSeqNum Three-digit numeric code identifying the table being processed by the currently
defined process. It defines the second, third and fourth digits of the UpdateOrder id
column.
SourceViewNum Three-digit numeric code identifying the v_source view being used by the currently
defined process. It defines the fifth, sixth and seventh digits of the UpdateOrder id
column.
SubStepNum Two-digit numeric code identifying the substep in the currently defined process. It
defines the last two digits (eighth and ninth digits) of the UpdateOrder id column.
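For illustration, with hypothetical values: a process with ETLPhaseNum 1, TableSeqNum 001, SourceViewNum 002 and SubStepNum 01 composes the nine-digit UpdateOrder id 100100201.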
dbo.UpdateOrder - Execute
There are some rare circumstances that require custom executables to be run within the ETL. This
functionality has been used to call the Canadian HouseHolding (Address parsing) module and for Cost
Allocation processing.
- The last two digits identify the substep defined (identified by the code stored
in the SubstepNum column).
Configuration Defines what the process configuration is – available values are:
- ModelBank: this entry has been added to satisfy Temenos core banking
mapping and/or business rules
- Local: the entry is used during the implementation to update or enhance
Framework or ModelBank functionality
- Framework: this entry has been added by the TFS Framework solution and
it is core banking agnostic
- PBModelBank: Used for Private Banking record definitions
- CampaignAnalytics: Used for Campaign Analytics solutions
- Predictive: Used for Predictive Analytics solution when deployed
TableName The name of the table being populated, e.g. DimAccount.
SourceSystem The name of the source system from which source data is extracted, e.g. ‘BS’ or
‘Budget’. If the data being processed is produced during Analytics ETL and is source
system agnostic (or relies on multiple source systems), the value ‘Framework’ will be
used in this column. Null values are also acceptable.
TableType Enter the stored procedure name and any parameters e.g.
insighthouseholding.dbo.s_insighthouseholding_update 'Insight', '0'
Action_ Extract.Execute or Transform.Execute or Load.Execute
Enabled_ Defines if the source system in which a process belongs is enabled. Acceptable values
are 1 (for Yes) or 0 (for No) and they are dependent on the Systems table.
Exclude Configure this column to exclude the current process from Analytics ETL. If set to 0,
the process is included and will be executed in the next ETL run. If set to 1, the
process is excluded and will be ignored in the next ETL run.
ETLPhaseNum One-digit code for the ETL phase of the currently defined process. Acceptable values
are 1 to 7. It defines the first digit of the UpdateOrder id column.
TableSeqNum Three-digit numeric code identifying the table being processed by the currently
defined process. It defines the second, third and fourth digits of the UpdateOrder id
column.
SourceViewNum Three-digit numeric code identifying the v_source view being used by the currently
defined process. It defines the fifth, sixth and seventh digits of the UpdateOrder id
column.
SubStepNum Two-digit numeric code identifying the substep in the currently defined process. It
defines the last two digits (eighth and ninth digits) of the UpdateOrder id column.
dbo.SourceDate
This table shows the current business date that is being processed. This date is used as the snapshot date
for the fact tables.
Column Name Description
SourceDateId Record Id (identity column). Populated automatically.
A copy of the SourceDate table can also be created to show the current business date for different source
system. The name of these source system-specific tables will follow the syntax SourceDate<Source System
Name> e.g. SourceDateBS and they will have the structure below.
Column Name Description
SourceDateId Record Id from the SourceDate table
<Source System Name>BusinessDate (e.g. BSBusinessDate) Current business date for this specific source system
<Source System Name>EndOfMonth (e.g. BSEndOfMonth) End of Month date for this specific source system
dbo.SystemParameters
This table is a Data Manager Custom Table rule where system parameters that are specific to the InsightStaging database are defined. As discussed in the Rules Engine section, the types and values for a given system parameter are mapped. When the rule creation steps are run by an agent job, the s_CreateRuleGroup procedure creates this table; hence all columns in this table are populated automatically.
UpdateLog (deprecated)
This log was updated by the ETL stored procedures in releases before R15. It tracked execution times for the extract, transform and load stages, rows processed, type 1 or type 2 changes, the number of updates and error messages for each dimension, fact and bridge table. This log was used to review record counts, execution times or ETL run progress.
Column Name Description
Batch Order by the Batch column descending to get the latest load listed first.
UpdateLogDetails (deprecated)
This was a transactional style log that was updated by the ETL stored procedures in releases before R15. Each procedure would write to UpdateLogDetails at the beginning of the procedure and before any major steps within the procedure. This log was used to investigate process interruptions or failures, or specific procedure execution times.
SELECT [UpdateLogDetailsId]
      ,[Query]
      ,[Rows]
      ,[Details]
FROM [InsightStaging].[dbo].[UpdateLogDetails]
where left(query,2) = 's_'
order by updatelogdetailsid desc
This query can be used to find the last stored procedures that ran before a load failure. For example, an administrator could scan the list, find s_DimAccount_Transform @UpdateLogId=38, and then run Exec s_DimAccount_Transform @UpdateLogId=38 rather than the entire ETL while tracking down the issue.
v_Source Views
At least one v_Source view should exist for each Dimension and Fact combination. Each v_source view
should at least have the following fields.
In a view name such as v_SourceAccountBSXY, the BS code corresponds to the Core Banking source system in the UpdateOrder table, and BSXY corresponds to the table type (TableType) in the UpdateOrder table.
The structure of each v_source view can vary greatly depending on the type of data to be mapped and on
the source system, however, all v_source views should contain the following:
FieldName Description
Source{tablename}Id E.g. SourceAccountId; this is the primary key of the view and should be unique within the view.
Foreign Key References Natural foreign keys so that surrogate foreign keys can be populated, e.g. SourceCustomerID, SourceEmployeeID.
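As an illustration, a minimal v_Source view skeleton containing the required fields might look like the sketch below; the source table and the non-key columns are hypothetical.

Create View v_SourceAccountXX
As
Select
    D.BusinessDate                            -- snapshot date (cross join to v_SourceDate, see System Views below)
    ,Src.AccountRef     as SourceAccountId    -- primary key of the view, unique within the view
    ,Src.CustomerRef    as SourceCustomerId   -- natural foreign key used to populate surrogate keys
    ,Src.OfficerRef     as SourceEmployeeId   -- natural foreign key
    ,Src.CurrentBalance as Balance            -- hypothetical measure column
From InsightSource.XX.SomeAccountTable Src    -- hypothetical source table for source system XX
Cross Join v_SourceDate D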
System Views
A number of views are required for systems purposes.
v_sourceDate{System}
Each System that is used needs to have this view, e.g. v_SourceDateBS.
The view returns the current date of the ETL. It should return only one record. The ETL will not run for the system if these dates do not match the dates of the v_source views.
Field Description
BusinessDate Typically, the latest date of the main banking system data source.
v_sourceDate
This view returns the current snapshot date. Each v_source view should cross join to this table.
v_MetaData
This is a system view that should not need any configuring or review.
Stored Procedures
dbo.s_InsightStaging_Update (Extract)
Description
This stored procedure represents the main stage of the Analytics ETL job which uses the Source Views to
transform the Temenos Core Banking data structure into the Warehouse Dimensional Model.
Steps
Drop All Tables with the prefixes
o Source
o Staging
o Dim
o Fact
o Bridge
Checks in InsightStaging..Systems that the system to which the v_source view being processed belongs is enabled.
Checks the BusinessDate of InsightMasterData.dbo.CurrentDate & dbo.SourceDate against the BusinessDate of InsightSource.BS.SourceDate. Uses v_sourceDate{System} to get the correct date.
o Error if no match
Executes Extract Step for each table
Execute Stored procedure s_{tablename}_extract
Populates (if applicable) the log tables and executes the corresponding stored procs as indicated in
the UpdateOrder table.
Checks to see if previous updates were successful before continuing.
Inputs
@ExecuteExtractSteps – defines if the Extract step should be executed. It should be set to 1 when the Extract phase has to be performed.
@BatchNum – Batch number used to execute the stored procedure. This parameter introduces ETL batch control in the stored procedure and can be obtained from a parameter or indirectly from the dbo.Batch table through the function.
@TotalThreads – allows the number of threads used to execute the stored procedure to be assigned manually (accepts NULLs). Reserved for future development in this stored procedure.
dbo.s_InsightStaging_update (Transform)
Description
This stored procedure executes the ‘Transform’ and ‘Load’ steps defined in the UpdateOrder table
Steps
EXECUTE procedures
o s_Fact{tableName}_Transform
o s_Fact{tableName}_Load
o s_Dim{tableName}_Transform
o s_Dim{tableName}_Load
Inputs
@ExecuteExtractSteps – defines if the Extract step should be executed. It should be set to 0 if the
Extract Phase has already been performed.
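As a usage sketch (parameter values are illustrative and the remaining parameters depend on the deployed procedure signature), the Analytics ETL job calls the procedure once per phase:

-- Extract phase
EXEC dbo.s_InsightStaging_Update
     @ExecuteExtractSteps = 1,
     @BatchNum            = 1,     -- illustrative batch number
     @TotalThreads        = NULL;

-- Transform and Load phase
EXEC dbo.s_InsightStaging_Update
     @ExecuteExtractSteps = 0;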
dbo.s_Generic_Extract
Description
This is a generic stored procedure internally called by the s_DimXXX_Extract and s_BridgeXXX_Extract
stored procedures.
Steps
Drops and recreate the relevant DimXXX table based on the InsightWarehouse data
dictionary definition
Drops and recreate the relevant FactXXX table based on the InsightWarehouse data
dictionary definition
Creates the StagingXXX table based on the content of the BridgeXXX and SourceXXX tables
Inserts and merges records into the StagingXXX table
Applies business rules assigned to the Extract phase to the StagingXXX table internally calling
the InsightETL.dbo.s_CreateRuleGroup
Updates Logs
Inputs
@StagingEventLogId - Foreign key pointing to dbo.StagingEventLog
@ProcId - The calling procedure's object identifier
@SourceMD – set to 1 to perform the manual entry (MD) steps for the SourceXXX table
@Transpose – set to 1 to transpose the source data
@RecreateFromDW – set to 1 to recreate the DimXXX or BridgeXXX table based on the [$(InsightWarehouse)].dbo.Dim(/Bridge)XXX table
@RecreateFactFromDW – false by default; when true, the FactXXX table is recreated based on the one in [$(InsightWarehouse)]
@CreateStagingTable bit – creates the staging table from the corresponding Dim(/Bridge)XXX and SourceXXX tables
@InsertMergeStagingTable – when true, the procedure inserts/merges the records into the StagingXXX table
@StagingMD – when true, performs the manual entry steps for the staging table
dbo.s_Generic_Transform
Description
This is a generic stored procedure internally called by the s_DimXXX_Transform and
s_BridgeXXX_Transform stored procedures.
Steps
Applies business rules assigned to the Transform phase to the StagingXXX table internally
calling the InsightETL.dbo.s_CreateRuleGroup
Inserts and merges the resulting columns into the StagingXXX table
Populates the DimXXX, FactXXX and BridgeXXX tables
Updates Logs
Inputs
@StagingEventLogId – Id of the corresponding StagingEventLog entry
@ProcId - Id of the calling stored procedure in order to figure out the table name from its name
@PricingUpdate - Optional parameter only needed for DimAccount; it should be set to true (1) only when the DimAccount table is processed and the optional Customer Profitability module is installed
@MDStaging - If true, the pre-defined InsightETL business rules are applied against the StagingXXX table
@SqlBeforeInsert - Optional parameter only needed for DimAccount. If specified, additional statements are carried out right before the @SqlTransform (INSERT INTO)
@CommentsForSqlBeforeInsert - Comments for the above statement; for logging purposes only
@SqlInsert nvarchar(max) - Specific INSERT INTO statement for the transform, including 3 placeholders: *TABLE*, *ALL*, *ALLDWS*
@AddPK - If true, the procedure will try to add a primary key. Avoid this for staging FactXXX tables because such a costly operation usually has no reasonable benefit.
@CustomPKColumn - Optional parameter only needed for DimDate. If specified and @AddPK = 1, the designated column is used as the primary key; otherwise, the default column matching the pattern 'Source*Id' is used as the PK
@SqlPostInserted - Optional parameter only needed for DimIndividual. If specified, additional statements are carried out right after the primary key is created
@CommentsForSqlPostInserted - Comments for the above statement; for logging purposes only
dbo.s_Generic_Load
Description
This is a generic stored procedure internally called by the s_DimXXX_Load, s_FactXXX_Load and
s_BridgeXXX_Load stored procedures.
Steps
Loads the DimXXX, FactXXX and BridgeXXX tables into the InsightWarehouse
Updates Logs
Inputs
@StagingEventLogId – Id of the corresponding StagingEventLog entry
@ProcId - Id of the calling stored procedure in order to figure out the table name from its name
In Model Bank, only seven load stored procedures are based on the existence of a business key: FactAcctTran_Load, FactActivity_Load, FactBrokerTrade_Load, FactEvent_Load, FactGlAdjmt_Load, FactGlTran_Load and FactOrder_Load. All other procedures load facts exclusively based on BusinessDate.
dbo.s_Dim{tablename}_Extract
Description
This kind of stored procedure triggers a number of stored procedures as configured in the Action_ field of the UpdateOrder table. Although this SP is named with the Dim prefix, it takes care of extracting the source views into source tables, which are not defined as Fact or Dim; it also creates both the Fact and the Dim tables in InsightStaging from the structure defined in InsightWarehouse.
Steps
The procedure first queries the UpdateOrder table and executes procedures as follows, based on the records in UpdateOrder.
Then, the procedure loops through the UpdateOrder table and executes various stored procedures to load the dimension.
UpdateOrder.Action_ = ‘Extract’ and
UpdateOrder.Action_ = ‘Extract.Substep’
EXECUTE s_Extract_SourceTable
This will extract the v_source{TableName}{TableType} view into InsightStaging..Source{TableName}{TableType}, where TableType is the corresponding column in UpdateOrder.
For example, the Update Order record partially described in the table below will result in the
v_sourceEmployeeBSDAO view being inserted into the table SourceEmployeeBSAA.
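Conceptually, the extract step performs something like the sketch below; the actual statement is built dynamically by s_Extract_SourceTable, which also checks for duplicates and creates the primary key. The view and table names are taken from the example above.

IF OBJECT_ID('InsightStaging.dbo.SourceEmployeeBSAA') IS NOT NULL
    DROP TABLE InsightStaging.dbo.SourceEmployeeBSAA;

SELECT *
INTO InsightStaging.dbo.SourceEmployeeBSAA
FROM dbo.v_sourceEmployeeBSDAO;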
Inputs:
dbo.s_Dim{tablename}_Transform
Description:
Updates the staging table custom columns that have the execution phase ‘Transform’ and inserts all data from Staging{tableName} into Dim{tableName} within InsightStaging
Steps
Runs for Transform entries of custom column relating to the particular table being processed.
EXECUTE s_transform_MD
Creates Insert taking the fields from sys.Columns records for the Dim{TableName}
Inserts all data from Staging Table into Dim{tablename} table of InsightStaging
Inputs:
StagingEventLogID – as previously defined
dbo.s_Dim{tablename}_Load
Description:
Simply runs the load Dimension Stored procedure
Steps
EXECUTE s_Load_Dimension
Inputs:
StagingEventLogID – as previously defined
dbo.s_Load_Dimension
Description
Loads data into the final InsightWarehouse tables
Steps
Get Columns from Data Dictionary
Load New Rows
o Look for new rows in staging Dim table
Inputs:
StagingEventLogID – as previously defined
ChangeCase
dbo.s_Fact{tablename}_Extract
Description
Nothing is done here other than logging an extract event for the fact table. The table is extracted in the
Dimension extract.
Steps
Update log details
Inputs:
StagingEventLogID – as previously defined
dbo.s_Fact{tablename}_Transform
Description
Updates the staging table custom columns that have the execution phase ‘Transform’ and inserts all data from Staging{tableName} into Fact{tableName} within InsightStaging
Steps
Runs for Transform entries of custom column relating to the particular table being processed.
EXECUTE s_transform_MD
Creates the Insert via hard-coded joins and takes the fields from the sys.Columns records for the Dim{TableName}
NB: These JOIN statements can be amended; if, for example, an extra Id is needed for a Dim table, a new join may be required to show that Id in the table.
Inserts all data from the Staging table into the Fact{tableName} table of InsightStaging
Inputs
StagingEventLogID – as previously defined
dbo.s_Fact{tablename}_Load
Description
Runs the load Fact stored procedure
Steps
Delete data from the Warehouse Fact table for the current business date
o This prevents the reprocessing of a batch from causing duplicates (see the sketch below)
EXECUTE s_Load_Fact
o To load the Warehouse tables
Inputs
StagingEventLogID – as previously defined
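A simplified illustration of the reprocessing protection mentioned above is shown below; the fact table and date value are illustrative, and the actual statement is generated by the procedure.

DECLARE @BusinessDate date = '2024-01-31';        -- illustrative current business date

-- Remove any rows already loaded for this date so that re-running the batch
-- does not create duplicates
DELETE FROM InsightWarehouse.dbo.FactAccount       -- illustrative fact table
WHERE BusinessDate = @BusinessDate;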
dbo.s_Load_Fact
Description
Loads data into the final InsightWarehouse tables
Steps
Get Columns from Data Dictionary
Load New Rows for Latest Business Date from staging Fact to Warehouse Fact
Inputs:
StagingEventLogID – as previously defined
ChangeCase
dbo.s_Extract_SourceTable
Description
Drops and creates the Source table from the source view.
Steps
Checks if table exists and drops if needed
Check for duplicates using PK passed as input variable
Create Primary Key on Source Table
Inputs
StagingEventLogID – as previously defined
dbo.s_Transform_CreateTableFromDW
Description
Recreates the Dim/Fact tables from the InsightWarehouse tables
Steps
Drop InsightStaging Dim/Fact table if exists
Create table from the InsightWarehouse equivalent
Add Primary Key to the field titled {tablename}ID. AccountID on FactAccTran is the exception to this.
Drop Computed Columns
Inputs
Table Name – name of the table to be created
Action – input parameter for logging purpose only, should be left null
dbo.s_Transform_CreateStagingTable
Description
Creates Staging Tables within InsightStaging
Steps
Drop the InsightStaging staging{tablename} table, if it exists
Retrieve all columns from the equivalent source{tablename}
Retrieve all columns from the equivalent Dim/Fact/Bridge{tablename}
Combine all field lists to create staging{tableName}
Inputs
Staging Table – name of the table to be created
Action – input parameter for logging purpose only, should be left null
dbo.s_SourceTable_OutOfRangeUpdate
Description
Checks for datatype/size conflicts between source and target table, and reports in the logging table.
Inputs
Source Table – name of the source table to be checked
dbo.s_StagingTable_update
Description:
This procedure populates Staging Table with new records from the Source Table. It inserts only new data
into the Staging{tableName} from Source{tableName} where the source{tableName}ID is not already in
the Staging table
Inputs
Staging Table – name of the staging table to be populated
dbo.s_StagingTable_merge
Description
This procedure updates the staging table with latest versions of records. It updates records in
Staging{tableName} from Source{tableName} where the source{tableName}ID of the source table
matches the staging table source{tableName}ID.
Inputs
Staging Table – name of the staging table to be populated
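Conceptually, the two procedures above perform statements of the following shape; the table and column names are illustrative, and the real statements are generated dynamically from the table metadata.

-- s_StagingTable_update: insert only records that are not yet in the staging table
INSERT INTO dbo.StagingAccount (SourceAccountId, Balance)
SELECT src.SourceAccountId, src.Balance
FROM dbo.SourceAccountBS AS src
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.StagingAccount AS stg
    WHERE stg.SourceAccountId = src.SourceAccountId
);

-- s_StagingTable_merge: refresh existing staging records with the latest source version
UPDATE stg
SET stg.Balance = src.Balance
FROM dbo.StagingAccount AS stg
JOIN dbo.SourceAccountBS AS src
    ON src.SourceAccountId = stg.SourceAccountId;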
dbo.s_Transform_CreateTableFromSS
Description
Caters for the creation of Custom Tables business rules
Steps
Drops the target table if it exists (Target Table = the CustomColumn field from InsightMasterData.CustomColumn)
Creates the target table from the source table using a filter. See below for the actual fields from InsightMasterData.CustomColumn.
o Target Table = CustomColumn
o Source Table = SourceColumn
o Filter = SourceDataFilter
Inputs
Source Table Name – name of the source table
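Conceptually, the procedure performs something like the sketch below; the target table, source table and filter values are hypothetical stand-ins for the CustomColumn, SourceColumn and SourceDataFilter fields.

IF OBJECT_ID('dbo.CustomLoanAccounts') IS NOT NULL   -- Target Table = CustomColumn
    DROP TABLE dbo.CustomLoanAccounts;

SELECT *
INTO dbo.CustomLoanAccounts
FROM dbo.StagingAccount                              -- Source Table = SourceColumn
WHERE AccountClass = 'LOAN';                         -- Filter = SourceDataFilter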
dbo.s_CustomCode_Execute
Description
Runs the custom code defined in an InsightETL Dataset business rule, e.g. “update table set field = value”. See the InsightETL chapter for details.
Steps
Creates a user with permissions to insert, delete, update, alter on the dim/fact/bridge/staging/source
tables of InsightStaging
EXECUTE the custom code passed (taken from the InsightETL.dbo.RuleDefinition table for Dataset
records)
Inputs:
CustomCode – text storing the custom code to be executed
dbo.s_Bridge{tablename1}{tablename2}_Extract
Same structure as the stored procedure to extract a fact table
dbo.s_Bridge{tablename1}{tablename2}_Transform
Same structure as the stored procedure to transform a fact table
dbo.s_Bridge{tablename1}{tablename2}_Load
Same structure as the stored procedure to load a fact table
dbo.s_EventLog_Add
Description
This stored procedure adds new rows to the EventLog table.
dbo.s_EventLogDetails_Add
Description
This stored procedure adds new rows to the EventLogDetails table.
dbo.s_transform_MD (deprecated)
Description
In Pre-R18 releases, this stored procedure (in the InsightStaging database) updated staging tables and fact
tables based on the MasterData stored in the CustomColumn and CustomValue tables. Specifically, it
created custom Column fields in staging tables and staging FactTables.
Steps
For Each Custom Column record, execute the relevant Stored Procedure depending on Custom column
Type.
Type 0
Type 1
Type 3
EXECUTE s_CustomCode_Execute
Type 2
Inputs:
TableName – Name of the InsightStaging table to be updated (e.g. StagingAccount)
ETL Phase – ETL phase in which the update should take place (i.e. Extract or Transform)
e.g.: Exec s_transform_MD ‘StagingAccount’, ‘Extract’
Configuration
The items that require configuration are described below.
At least one view needs to exist for each dimension and fact table; the same view populates both the dimension and the fact table.
For instance, if FactAccount and DimAccount need to be populated then a view called v_SourceAccountXX
needs to be created, where XX signifies the name of the source system from which Account data is
extracted. So in many cases, the v_SourceAccountBS view will populate account records from the banking
system (BS) into DimAccount and FactAccount.
Fact and dimension tables for the same object, in this example DimAccount and FactAccount, can be
populated by different sub-systems (e.g. Core banking modules) in the same source system.
If for example, some account records in a specific Analytics installation come from the main account module
in Core banking, some come from the MM (Money Market) module and others come from the LD module
(Loan and Deposits), then three v_Source views need to be created and they would be called
v_SourceAccountBS, v_SourceAccountBSMM, and v_SourceAccountBSLD. It should be noted that unions
should never be done inside a v_Source view.
V_source views for second level subsystems can also exist – e.g. the Core banking module called AA
(Arrangement Architecture) is structured into AA Accounts, AA Deposits, AA Lending etc. Therefore, if one
or more of the AA module subproducts are enabled in the current Analytics installation, the corresponding
v_source views need to be created e.g. v_SourceAccountBSAA_Accounts,
v_SourceAccountBSAA_Deposits, v_SourceAccountBSAA_Lending etc.
Similarly for another source system that is going to populate DimAccount and FactAccount, say the Budget
SourceSystem, a v_source view called v_SourceAccountBudget would be created.
Each v_SourceView should contain a natural key field called Source{TableName}ID, where {TableName}
would be Account in the example above with DimAccount and FactAccount. This key must be unique for
all of the views for a particular staging table such as stagingAccount in this example.
Adding a v_SourceView
For example, a new wealth management subsystem (abbreviation WM) has been added to the banking system (SourceSystem = BS), and account records from this subsystem need to be added to DimAccount and FactAccount.
Step 1
Ensure that the required tables are available in InsightSource. If not, configure InsightImport (which stores data extracted from Temenos Core Banking only) and InsightLanding appropriately so that the table is transferred to InsightSource. See the InsightImport Configuration and InsightLanding Configuration sections for more details.
Step 2
Create the new v_source view, e.g. v_sourceAccountBSWM. In the sketch below the joins to the account (A) and customer (B) tables are illustrative; the actual source tables depend on the banking system schema.
Create View v_sourceAccountBSWM
As
Select
    D.BusinessDate
    ,A.Branch_Co_Mne + ':' + WM.[@Id] as SourceAccountID
    ,B.[@ID] as SourceCustomerID
    ,A.EmpID as SourceEmployeeID
    ,A.Amount as Balance
From InsightSource.BS.tableWM WM
    Join InsightSource.BS.tableAccount A on WM.ID = A.ID              -- illustrative join
    Join InsightSource.BS.tableCustomer B on A.CustomerID = B.[@ID]   -- illustrative join
Cross Join v_SourceDate D
Step 3
Add new entries to update order table so that v_source view data is added to the appropriate Dimension
and Fact tables. See Update Order table description.
Update Order
The UpdateOrder table controls which dimensions and facts are processed, and which v_Source views are
used to populate them.
Each dimension and fact have corresponding entries in the UpdateOrder table which controls the Extract,
Transform and Load processes.
We can see a partial sample of the Update order entries used to populate DimAccount in the table below:
During the Extract Process, the above records will result in the following tables being created:
InsightStaging.SourceAccountBSWM
InsightStaging.SourceAccountBudget
These two tables above will then be inserted into a single StagingAccountTable.
Dimension Transformation and Load processes are switched on as shown in the partial representation of
Update order entries in the following table:
(Nothing needs to be done if this is an existing dimension being loaded by other v_source views)
The Fact Table Transformation and Load processes are switched on as shown in the table below (again,
not all columns are shown):
(Nothing needs to be done if this is an existing Fact table being loaded by other v_source views)
Fields from the v_source view are added to either the Fact or Dimension table depending on their
description in the InsightWarehouse..DataDictionary table.
Adding Systems
New systems need to be enabled in InsightStaging..Systems. Ensure that Enabled_ is set to 1 for all active source systems.
The table below shows a partial example of enabled BS and Budget source systems in the Systems table
We will now discuss how to add tables from a new source system.
Let us assume, for example, that a new source system containing customer account data needs to be added to the Insight data warehouse. The data is at the same level of granularity as the banking system, and it is snapshot-type data. The system abbreviation will be “INS”.
Preparation
You should prepare Extracts from Source System. These extracts should be a snapshot of Customer and
Account details and any incremental transactions.
Then you should map the extracted data to existing InsightWarehouse objects (using InsightStaging
v_source views and InsightWarehouse Data Dictionary table).
InsightLanding
Assuming the source system has two tables – Customer and Account.
Update Landing ExtractList and ExtractSourceDate tables with new data source tables and date.
ExtractList
The table below shows a partial example of Extract list configuration to add tables from a new source
system called INS.
ExtractSourceDate
The table below shows a partial example of Extract Source Date configuration to add tables from a new
source system called INS.
BSDateSQL select @bsdate = MISDate from INSData.dbo.{table that returns a snapshot date} T-SQL statement which extracts the current date for the source system
Configuration Local Configuration defines the information source for the current row – available values are:
- ModelBank: this entry has been added to satisfy Temenos core banking mapping and/or business rules
- Local: the entry is used during the implementation to update or enhance Framework or ModelBank functionality
- Framework: this entry has been added to the TFS Framework solution and it is core banking agnostic
- PBModelBank: Used for Private Banking record definitions
- CampaignAnalytics: Used for Campaign Analytics solutions
- Predictive: Used for Predictive Analytics solution when deployed
InsightSource:
You need to create a new schema for the data source in InsightSource. This should be the same as the SourceName in InsightLanding..ExtractList.
Update s_InsightSource_Synchronize_Schema and copy it to post-deploy. This is so that when history is re-processed, the v_source views for the new source system will not fail when run for dates before the data existed.
For example, a block similar to the sketch below can be added to s_InsightSource_Synchronize_Schema.
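The INS schema and placeholder objects in this sketch are hypothetical; the exact statements depend on the new source tables.

IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'INS')
    EXEC ('CREATE SCHEMA INS');

-- Hypothetical placeholder tables so that v_SourceCustomerINS and v_SourceAccountINS
-- do not fail when history is re-processed for dates before the INS data existed
IF OBJECT_ID('INS.Customer') IS NULL
    CREATE TABLE INS.Customer (SourceCustomerId nvarchar(50) NULL, MISDate datetime NULL);

IF OBJECT_ID('INS.Account') IS NULL
    CREATE TABLE INS.Account (SourceAccountId nvarchar(50) NULL, MISDate datetime NULL);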
InsightStaging
You should add the new v_source views, v_SourceCustomerINS, and v_SourceAccountINS.
Then you should add new corresponding extract.substep and add.substep records to the Update Order
table.
A partial example is provided in the table below:
InsightWarehouse
Overview
The Insight Warehouse database is the end point for all data to be stored for analytical reporting and ad
hoc analysis. It stores multiple dates of data for multiple source systems in a star schema dimensional
model based on Kimball data warehousing methodology. This model is optimized for query performance
and the storage of large amounts of both transactional and snapshot data.
In previous releases, Temenos Analytics loaded data into Dimension and Fact tables using traditional database storage of tables in rowstore format. In a rowstore, data is logically organized as a table with rows and columns, and then physically stored in a row-wise data format.
As data volumes increased, storing data in InsightWarehouse in rowstore format translated into higher disk space requirements.
Microsoft SQL Server introduced the columnstore index, a technology for storing, retrieving and managing data by using a columnar data format. This type of index stores data column-wise instead of row-wise.
Temenos Analytics has adopted and implemented columnstore index technology in InsightWarehouse. The rowstore primary key index has been replaced with a clustered columnstore index on all Bridge and Fact tables.
The main benefits of using columnstore indexes in InsightWarehouse are high compression rates and large query performance gains over traditional row-oriented storage. Disk space requirements have dropped by up to 90%.
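As an illustration of this indexing change (the index and table names are illustrative; the indexes are created as part of the standard InsightWarehouse deployment), a clustered columnstore index on a fact table is defined as follows:

-- Replace the rowstore primary key with a clustered columnstore index
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactAccount
    ON dbo.FactAccount;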
Metadata Driven The data warehouse is controlled and populated based entirely on
the metadata contained in the data dictionary configuration table.
This makes it easy to add additional columns to tables and the
abstraction views and control the slowly changing dimension
attributes of dimension table columns.
Technical Details
Architecture
In the figure below, we can see how the InsightWarehouse database fits into the Advanced Analytics platform’s architecture.
Data Model
The diagram below offers a representation of the InsightWarehouse data model.
Role Playing Dimensions are dimensions which can play different roles in a fact table depending on the context – e.g. time appears in most types of analysis because business activities happen in a timeframe and objects exist in time. Time is almost always used when calculating and evaluating measures in a fact table, and the time information may be required in different formats such as ‘Time of Day’, ‘Time in minutes’ or ‘Time with AM or PM’. To define different views of the same dimension, we use a particular kind of dimension table called a Role Playing Dimension table; these are represented in pink.
Transaction Fact tables are represented in purple. The grain associated with this kind of fact table is usually specified as "one row per line in a transaction", e.g. every line on a receipt. Typically, a transactional fact table holds data at the most detailed level, causing it to have a great number of dimensions associated with it.
There are also tables in charge of representing business performance at the end of each regular, predictable time period. These are called Periodic Snapshot Fact tables; they are represented here in blue and, as their name suggests, they are used to take a "picture of the moment", where the moment could be any defined period of time, e.g. a performance summary of product sales over the previous month. Any periodic snapshot table is dependent on an underlying transaction fact table, as it needs the detailed data held in the transactional fact table in order to deliver the chosen performance output.
Finally, the diagram shows a set of Bridge tables, represented in orange. A bridge table sits between a fact table and a dimension table and is used to resolve many-to-many relationships between a fact and a dimension. A bridge table contains only two dimension columns: the key columns of both dimensions. For example, suppose there are two dimensions, Customer and Account. A bridge table can be created by joining the Customer and Account tables using dimension keys (e.g. the Customer Number on the Account table). Please note that Bridge tables are fact-less, with no measures.
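For example, a bridge table such as BridgeAccountCustomer resolves the many-to-many relationship between accounts and customers. The query below is a simplified sketch (the snapshot date is illustrative, and reporting should normally go through the abstraction views described later in this chapter):

SELECT c.CustomerNum, a.AccountNum, b.JointType
FROM dbo.BridgeAccountCustomer AS b
JOIN dbo.DimAccount  AS a ON a.AccountId  = b.AccountId
JOIN dbo.DimCustomer AS c ON c.CustomerId = b.CustomerId
WHERE b.BusinessDate = '2024-01-31';   -- illustrative snapshot date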
[Data model diagram: star-schema layout of the InsightWarehouse tables, showing the Dimension tables (including DimAccount, DimActivity, DimAsset, DimCampaign, DimCard, DimChannel, DimCollateral, DimCurrency, DimDate, DimEvent, DimGL, DimIndividual, DimLimit, DimOrder, DimPortfolio, DimProfile, DimProgram, DimSDB, DimSecurityPosition, DimSystemSecurity, DimThirdParty, DimTime and DimWM), the Fact tables (including FactActivity, FactBrokerTrade, FactCollateral, FactCurrency, FactEvent, FactGL, FactGLAdjmt, FactIndividual, FactLimit, FactOpportunity, FactOpportunitySuccess, FactOrder, FactPortfolio, FactProduct, FactSDB and FactSecurityPosition), the Bridge tables (including BridgeAccountCard, BridgeAccountCollateral, BridgeAccountCustomer, BridgeAccountPortfolio, BridgeAccountSDB, BridgeCampaignChannel, BridgeCustomerAsset, BridgeCustomerCard, BridgeCustomerCollateral, BridgeCustomerLimit, BridgeCustomerPortfolio, BridgeCustomerSDB, BridgeLimitCollateral and BridgeOpportunityProfile) and their key columns.]
Abstraction Views
[Abstraction view diagram: column lists for the abstraction views, including v_Account, v_AccountCollateral, v_AccountJoint, v_AcctTran, v_AcctTranSub, v_Activity, v_Address, v_Asset, v_Branch, v_BrokerTrade, v_Campaign, v_CampaignChannel, v_Card, v_Channel, v_Collateral, v_Customer, v_CustomerAsset, v_CustomerCollateral, v_CustomerLimit, v_DAO, v_DataDictionary, v_date, v_Employee, v_Event, v_GL, v_GLTran, v_Limit, v_LimitCollateral, v_MetaData, v_Opportunity, v_OpportunityProfile, v_OpportunitySuccess, v_Order, v_Organisation, v_Portfolio, v_Product, v_Profile, v_Program, v_SDB, v_SecurityPosition, v_SysSecVisibleRecords, v_SystemSecurity, v_ThirdParty and v_Time.]
The abstraction views are what are ultimately exposed to end users of the data warehouse like report
developers and advanced end users with SQL knowledge. No reports should touch the tables directly. The
views hide a lot of the complexity of the dimensional model by joining the dimension and fact tables along
with most of the common role playing dimensions to provide a wide view of a single Insight concept like
Customer or Account.
Additionally, the views contain the joins for the data security layer, so if a user is to have restricted, role-based
access to data stored in the warehouse then they must access data through the views.
Almost all joins in the view layer should be made on the surrogate key (fields that end in Id without Source
in front, like AccountId) together with the business date in order to properly utilize the indexing built into the
warehouse. There are some cases, such as querying over time, where you will want to use the business key in
your query to account for changes in a dimension over time. The business keys are the fields that start with
Source and end in Id, like SourceAccountId.
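As an illustration of this convention, the hedged sketch below joins two abstraction views on the surrogate key and the business date; the view and column names follow the conventions described above and should be adjusted to the actual warehouse configuration.
SELECT c.CustomerNum,
       a.Balance
FROM dbo.v_Account a
JOIN dbo.v_Customer c
    ON  c.CustomerId   = a.CustomerId    -- surrogate key join
    AND c.BusinessDate = a.BusinessDate  -- always include the business date
WHERE a.BusinessDate = '2014-01-31';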
Technical Components
Tables
DataDictionary
The DataDictionary table is used to store the table and view definitions for InsightWarehouse. The
definitions stored in this configuration table are used by procedures in the data warehouse to automatically
create the warehouse tables and views.
Records are at the InsightWarehouse column granularity. Each source field being brought into
InsightWarehouse may have several records in the data dictionary depending on how many abstraction
layers the column is being displayed in and whether the column has been defined for more than one
configuration.
The data dictionary has a concept of configuration layers. Typically there are at least two if not more
configuration layers present in the data dictionary definitions. The first layer is the Framework layer which
contains a set of source system-independent columns that are required to build the skeletal structure of
the data model. This set consists of system columns such as primary and foreign keys and dimensional
table columns like the Added, Modified and Active columns. These columns are the minimum requirements
to run Analytics ETL.
The next layer consists of a configuration layer labeled as Core which is used to host data from any core
source system. This configuration layer contains all of the core columns that are provided out-of-the-box
with the Advanced Analytics Platform.
The next layer that can be included is for specific columns required for a particular banking system, such as
Temenos Core Banking Model Bank, i.e. the Modelbank layer.
Finally, there is a client configuration layer that is used to mark columns that are specific to a particular
bank’s implementation of Insight, i.e. the Local layer.
In addition to these basic configuration layers, more can be added if a client installs any optional module
in the Advanced Analytics Platform or in the Core banking source system. Some examples could be:
Steps
1. Delete existing Combined Configuration records from Data Dictionary
2. For all table columns, determine the data type and size that will properly store data fitting
all definitions.
3. Create a new Combined Configuration record
Inputs
No parameters are required for this stored procedure.
dbo.s_TableStructureFromDD_update
Description
This procedure uses the table definitions in the Data Dictionary Combined Configuration records to update
the InsightWarehouse tables to meet these definitions. It will not make any changes that will result in data
loss. It attempts to perform table alter statements to add columns or update data types and sizes.
Steps
1. Check if table column is new, if so then add it to the table
2. If the column exists, check for a type mismatch.
3. If type mismatch is discovered attempt to automatically update table column to new type
4. If the auto update fails return error stating a manual update may be needed or the definition
is wrong.
Inputs
dbo.s_ViewStructureFromDD_update
Description
This procedure drops all data warehouse views and recreates them based on the view definitions in the
Data Dictionary table. It uses template views as the base for the new view. The template views contain all
of the necessary join logic but contain no columns.
Steps
1. Get list of view templates
2. Get the list of fields for the first template from the Data Dictionary
3. Create the view with all direct columns and calculated columns
4. If the create view statement fails, drop all calculated columns, list those columns in the print output,
then create a view with just the direct columns.
5. If any direct columns are missing from the tables, create the view without the missing columns and
print the column names to the output.
Inputs
No parameters are required for this stored procedure.
dbo.s_DW_DataDelete_Dimension
Description
This is a maintenance procedure used to delete orphaned dimensions from the InsightWarehouse. It is
internally called by s_DW_DataDelete_Range.
dbo.s_DW_DataDelete_Fact
Description
This is a maintenance procedure used to delete fact and bridge table records from the InsightWarehouse.
It is internally called by s_DW_DataDelete_Range.
dbo.s_DW_DataDelete_Range
Description
This is a maintenance procedure used to purge records from the InsightWarehouse. It first purges all fact
and bridge table records with a business date that falls in the date range specified. Then it removes all
orphaned dimension records that are no longer needed after the fact removals.
Steps
1. Determine the first period to be removed from the warehouse excluding month end dates if
the parameter is set as such
2. Exec s_DW_DataDelete_Fact
a. Removes all fact and bridge records in the period
3. Exec s_DW_DataDelete_Dimension
a. Removes all orphaned dimension records
4. Set the Show Data and Has Good Data flags in the Dim Date table to 0 for all removed dates
Inputs
Delete Start Date – Earliest date in the range of business dates to be removed from
InsightWarehouse. Date format is the native format for the SQL server collation being used.
Example ‘2014-01-01’
Delete End Date - Latest date in the range of business dates to be removed from
InsightWarehouse. Date format is the native format for the SQL server collation being used.
Example ‘2014-01-31’
Exclude Month End – Flag to indicate if month end dates should be retained in InsightWarehouse
after the purge. Set to 1 (yes) to retain month-end dates. Defaults to 1.
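For reference, a call to the purge procedure might look like the sketch below; the parameter names shown are illustrative and should be checked against the actual procedure signature.
EXEC dbo.s_DW_DataDelete_Range
    @DeleteStartDate = '2014-01-01',   -- earliest business date to remove
    @DeleteEndDate   = '2014-01-31',   -- latest business date to remove
    @ExcludeMonthEnd = 1;              -- retain month-end dates (default)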
dbo.s_EmptyColumns_drop
Description
This procedure drops all empty non-framework columns from InsightWarehouse. Framework columns are
all columns that have Framework for their Configuration in the Data Dictionary.
Steps
1. Get list of all non-framework columns and check if empty
2. Drop all empty columns
Inputs
This stored procedure does not require any parameters.
dbo.s_BusinessDate_Enable
Description
This stored procedure is executed at the end of the core Analytics ETL to enable the latest imported business
date in the InsightWarehouse database. As a result, newly imported data associated with this business date
will be available for reporting in the database engine and in the Analytics Web Front End Application. Once
this stored procedure has run successfully, the HasGoodData flag will be set to 1 (i.e. Yes).
dbo.s_CreateDWHashObjects
Description
This procedure creates the hash calculating columns, indexes and triggers for dimension tables in
InsightWarehouse (but will not affect dimension tables with the same name in InsightStaging). Please note
that the deployment of the CLR function fn_MaxUnicodeSha1HashCLR is a pre-requisite for this stored
procedure to run successfully.
dbo.s_DW_CheckDBIntegrity
Description
This procedure performs a database integrity check in InsightWarehouse.
dbo.s_DW_IndexMaintenance
Description
This procedure rebuilds and reorganizes indexes in InsightWarehouse, applying the following rule:
dbo.s_DW_UpdateStatistics
Description
This stored procedure updates statistics on all InsightWarehouse tables.
dbo.s_FilterByDimSystemSecurityUser
Description
This stored procedure allows the content of tables and abstraction views in InsightWarehouse to be filtered
based on the row-level security settings defined for each individual user, if Data Access Security is
configured.
Configuration
Configuring Data Dictionary Table
The Data Dictionary table is used to configure the existing objects in the data warehouse. This includes
adding new columns to tables and views, modifying table columns to change data types or increase the
column size and also creating new calculated columns in views.
When adding a new table column you will need to consider if the column belongs in a dimension table or
a fact table. This requires you to know how frequently the data in the column will change over time. The
general rule is that any column that changes once a month or more frequently should be added to the fact
table. This will help control dimension growth and keep it at a reasonable level. If you are unsure, it is best to
add the column to the fact table.
If you are adding a new slowly changing column to a dimension table then you need to also decide what
type of slowly changing column type you want to use. The Advanced Analytics Platform supports two slowly
changing dimension column types, Type 1 and Type 2.
When a column is set to Type 1, the old value is overwritten in the existing record whenever the data in the
column changes. This means that the old value is discarded and not available in history. If you look up
an old record it will refer to the new value. This type is used for columns whose updates almost always
correct an initially incorrect value, or where there is no analytical value in retaining the old value.
Examples are Customer Birth Date or Customer Tax Identification Number. These should
only ever be updated because the original value was incorrect.
Type 2 slowly changing columns when updated will create a new record in the dimension table. This means
that facts prior to the update of the column will refer to the older value while records added after the
update will refer to the new value. This is done when you want to retain the historical value since it makes
analytical sense to have the older facts refer to the original value. An example would be Customer Address
fields.
If you are adding the new column to a role-playing dimension, like a branch or employee table, then you
may want to add additional records to bring this new column directly into one or more abstraction views,
such as v_Account or v_Customer, to avoid requiring end users to make an extra join to the role-playing view
to use the new field.
For the column to be populated it must be populated in InsightStaging using logic in a source view or from
InsightETL. Please refer to the InsightStaging section of this guide for the detailed steps to accomplish this.
The following example will cover adding a new dimension table record to the Branch Dimension table. For
a fact table record, you would not add a value for the SCD Type column.
Steps
1. Add a new table record to the Data Dictionary table as shown in the table below (partial example). Add
the record to a client specific configuration so it’s obvious that it is a local development.
When adding a record to include a role playing dimension you will need to reference the join logic from the
view template. Open a view template to see all of the available join references that exist or review existing
role-playing column entries in the Data Dictionary table. All view templates are in the Template schema in
InsightWarehouse. The syntax for view joins is TableA>TableB>TableN etc.
If multiple JOIN statements are made to the same table in the view, subsequent joins use a
sequential number in parentheses, such as TableA>TableB(2)>TableN. This references the second join
made to TableB. This is used when you need multiple role-playing references, such as having multiple
account employees or multiple customer employees. Remember when joining to a dimension you are
joining a fact table to a dimension table so the first table in the source table column should always be a
fact table, as shown in the table below (partial example).
In the partial example, shown in the table below, we would already have a record in the data dictionary
for this column and we would be adding an additional table record for the column under a new local
configuration.
Existing Record:
Steps
1. Add new record to the Data Dictionary table
2. Exec s_DDCombinedRecordsUpdate – This will create a new Combined Configuration record for the
table column. It compares all column definitions in all configurations to find the best type and size. In
this case, we’ve added a record with a larger size so it will create a new Combined Configuration record
with the larger size varchar(50) as the data type.
3. Exec s_TableStructureFromDD_update – This will attempt to actually modify the column with a table
alter statement based on the new Combined Configuration definition from the Data Dictionary table. It
will report success or failure in the output message of the stored procedure.
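In practice, after adding or changing a Data Dictionary record, the two procedures can be run back to back; a minimal sketch, assuming both procedures are executed without parameters, is shown below.
EXEC dbo.s_DDCombinedRecordsUpdate;        -- rebuild the Combined Configuration record
EXEC dbo.s_TableStructureFromDD_update;    -- apply the new definition to the warehouse table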
The calculation must be based on a field that exists in the data warehouse either in a table or a view. Also,
the column you are using in the calculation must be included in one of the existing tables in the join logic
for the view template the view is based on.
For example, if I want to create a calculated column in the v_GL abstraction view I would need to view the
template join logic for v_GL. This is located in the InsightWarehouse.Template.v_GL view. Opening this
view will show me the join logic and what tables are available to use. The join logic for this view is shown
below.
from
    DimGL left join
    FactGL on
        DimGL.GLId = FactGL.GLId left join
    DimCustomer [factgl>DimCustomer] on
        [factgl>DimCustomer].CustomerId = FactGL.CustomerId left join
    DimBranch [factgl>DimBranch] on
        FactGL.BranchId = [factgl>DimBranch].BranchId left join
    DimBranch [factgl>DimBranch(2)] on
        FactGL.BranchId2 = [factgl>DimBranch(2)].BranchId left join
    DimEmployee [factgl>DimEmployee] on
        FactGL.EmployeeId = [factgl>DimEmployee].EmployeeId left join
    DimAccount [factgl>DimAccount] on
        FactGL.AccountId = [factgl>DimAccount].AccountId
Here we can see the database references we will use for our calculation. We can reference DimGl or FactGL
directly but if we want to create a calculation using one of the fields from the second join to DimBranch,
for example, we would need to use [FactGl>DimBranch(2)].Columnname. For the majority of
calculated columns, you will be using the fact or dimension table from the object (i.e. Account or Customer)
and you can just reference this directly without needing to open the view template.
For the example, we will create a new column in the Customer abstraction view called CustIsRich which
will divide the customers into Rich and Poor based on the deposit balances. While this is better handled in
InsightETL it will serve as an example here to keep it simple.
When creating the calculation, we will write it as if it were included in the select statement for the view. We
do not need to include the commas, the Select or From keywords, or column aliases. The alias for the
calculated column is the value in the Column Name column.
Steps
1. Determine which fields are required for the calculation. For our example, we will use
FactCustomer.DepsBalance. Since this is in the fact table that v_Customer uses, we can reference the
column directly.
2. Create a new view record in the Data Dictionary
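A hedged sketch of the calculation expression for this record is shown below; the 100,000 threshold is purely illustrative. As described above, the expression is entered without commas, Select/From keywords or a column alias (the alias comes from the Column Name column).
case when FactCustomer.DepsBalance >= 100000 then 'Rich' else 'Poor' end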
Update Stats on SW
Sp_updatestats is a standard T-SQL stored procedure that runs UPDATE STATISTICS against all user-
defined and internal tables in InsightWarehouse. This stored procedure is executed within the Update Stats
on SW step of Analytics ETL.
Maintenance Procedures
The stored procedure InsightETL.dbo.s_ColumnStoreIndex_Defragmentation is executed right after data has
been loaded into InsightWarehouse Dimension and Fact columnstore index tables. It is executed as follows:
EXEC [dbo].[s_ColumnStoreIndex_Defragmentation]
@DatabaseName = N'InsightWarehouse',
@CompressRowGroupsWhenGT = 5000,
@MergeRowGroupsWhenLT = 150000,
@DeletedTotalRowPercentage = 10,
@DeletedSegmentsRowPercentage = 20,
@EmptySegmentsAllowed = 0,
@ExecOrPrint = N'EXEC',
@BatchNum = null
InsightSystem
Overview
InsightSystem is exclusively used to store configuration tables for the Analytics Web Front End Application.
These configuration tables should be directly updated through the web application – please refer to the
Analytics Application Functional User Guide for more details about this. Furthermore, this database is not
involved in the ETL processing or in any other data processing or maintenance operation.
Budget Data
Overview
The Advanced Analytics Platform has a significant amount of functionality and content for budget reporting,
but it is important to highlight that Analytics is not a budgeting or forecasting tool. It has the flexibility to
handle the import and reporting of multiple budgets.
Budget data is imported into the Fact GL table and stored as separate rows rather than in dedicated columns.
This is how Analytics supports multiple budgets without the need to create new columns or new cube
attributes. All amounts are stored in the GLAmount column in Fact GL.
You can differentiate budget data from banking system data by the GLSourceSystem column. This will have
a value of BS for all banking system values and Budget (or something similar) for all budget values. The GL
abstraction view has logic to create GLActualAmount and GLBudgetAmount columns to split the actual
and budget amounts column-wise. These columns are used in the out-of-the-box budget dashboards and
budget reports.
The standard process for importing budgets into Analytics is to enter budget data into an Excel spreadsheet
and use the data import functionality of SQL Server to add this data to a table in the Budget database.
Budget data is treated as a separate source system in Analytics.
Budgets can be more granular than the GL Line level, for instance, a bank can budget for each GL line per
branch. Essentially any additional attributes already defined in the General Ledger view in
InsightWarehouse can be used to slice the budget data and support more granular budgets. Keep in mind
that even though this is possible, entering and maintaining budgets at a very fine level of granularity can
become quite onerous.
Even if not budgeting by branch it is required to assign all budget amounts to at least one default branch,
typically the admin branch. The minimum fields that are required for budget data are shown in a standard
budget table below. There would be an amount column for each month but in the example, we only show
from Jan to Mar.
The Budget database should have one table added manually titled SourceDate. This should have one
column titled BudgetBusinessDate. Add one record with the current budget date being imported, for
example, 2014-01-01 for the 2014 budget.
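A minimal sketch of creating and populating this table is shown below; the date data type is an assumption.
CREATE TABLE Budget.dbo.SourceDate (BudgetBusinessDate date);   -- column type assumed
INSERT INTO Budget.dbo.SourceDate (BudgetBusinessDate) VALUES ('2014-01-01');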
2. Use SQL Server Import and Export Wizard to import data into Budget database
2.1. Set data source to Excel spreadsheet and point it to the Excel sheet picking the latest version of
Excel in the list if the installed version is not present. Indicate if the first row has column names.
2.2. Choose the Budget database as the destination
2.3. Choose the copy data from one or more tables or views option
2.4. Select the sheet that has the budget data. Rename the destination table as GLBudget or something
else descriptive. Click the preview button to make sure the data looks correct.
2.5. Select run immediately and Finish.
2.6. The package should run, reporting success and the number of rows imported. Check that the
row count is correct. If the package reports errors, check permissions on the Budget database.
3. Create new Extract List Record
Create a record in the InsightLanding Extract List table for the new Budget table. This should have its own
source name, like Budget, to indicate that this is data coming from a system other than the core banking
system.
4. Create a record in the InsightLanding Extract Source Date table for the budget data. This record needs to
include a query that returns the source date for the budget data. For our example we will use the query:
Select @bsdate = max(budgetdate) from Budget.dbo.GLBudget.
5. Exec s_InsightLanding_Update('Budget', '2014-01-01', 'dbo', 'No')
Run the Landing update procedure to bring the budget data into Landing. If everything was configured
correctly for our example we would see two new tables 20140101Budget.GLBudget and
20140101Budget.SourceDate in InsightLanding.
6. Exec s_InsightSource_Update('Budget', '2014-01-01', 'No')
Run this procedure in InsightSource to bring in the budget data so it is ready to be used in InsightStaging
for the next step.
select
    select GlLineNum as Line_No, Jan, Feb, Mar, Apr, May, June, July, Aug, Sept,
           Oct, Nov, [Dec], GLBranchMnemonic
    from [$(InsightSource)].Budget.GLBudget) p
unpivot
(
    GLAmount for Month_ in (Jan, Feb, Mar, Apr, May, June, July, Aug, Sept, Oct,
                            Nov, [Dec])
) as unpvt
) b join
v_sourceDateBudget d on
    b.MonthNumber = datepart(month, d.BusinessDate)
select
    1 as sourceDateID, -- Do not use. For extract step only
    max(sd.BusinessDate) as BusinessDate,
    MAX(bd.BudgetBusinessDate) as BudgetBusinessDate
from
    [$(InsightMasterData)]..CurrentDate cd join
    [$(InsightMasterData)].dbo.SourceDate sd on
        cd.BusinessDate = sd.BusinessDate and
        sd.SourceSystem = 'Budget' left join
    [$(InsightSource)].Budget.sourceDate bd on
        bd.BudgetBusinessDate = sd.SourceDate
Select data from the source views and ensure that the correct number of rows is being returned.
Remember that the budget data has been unpivoted and, since the view joins to the budget date view, it will
return only one month of data. There must be a record in InsightMasterData..SourceDate for the Budget
source system for the business date being processed (the date in InsightMasterData..CurrentDate) for the
budget source view to return data.
Ensure that the InsightStaging Update Order table has records for the budget source system. If not you
will have to create two records for the extract and load steps for DimGL.
Check the Systems table and make sure there is a record for the Budget system and it is set to enabled.
7. Exec s_InsightStaging_Update(1,1)
Run the InsightStaging update procedure for a business date that should have budget data. This will load
the budget data into InsightWarehouse for final testing.
Select Distinct(GLSourceSystem) from v_GL where BusinessDate = {Business date processed in previous
step}
You should now see a source system that is defined in the budget source view and for the default
configuration that would be Budget. Finally, you can select all records from v_GL where the GL Source
System is equal to Budget to confirm that all budget records have been imported successfully.
The budget amount should be in the GLAmount column in v_GL and also in the GLBudgetAmount column.
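For example, a hedged version of this final check, using the example budget date from earlier in this section:
SELECT *
FROM v_GL
WHERE GLSourceSystem = 'Budget'
  AND BusinessDate = '2014-01-01';   -- the business date processed above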
Technical Details
Technical Components
InsightETL
dbo.s_MergeUpdateBatchStatus
This stored procedure, executed at the beginning of Analytics ETL, creates a new batch in the Batch table
and will perform in-place updates to the involved Batch records.
dbo.s_CreateColumnCalculations
This stored procedure reads the content of the AttributeCalculations table in InsightETL and, based on the
Split, Calculations and Datasets definitions, it creates new columns in the target tables. In Analytics ETL,
this stored procedure is directly executed twice, first on the tables imported in the InsightImport database
and then on the tables imported into InsightSource. Furthermore, this stored procedure is internally called
by the s_InsightStaging_Update stored procedure when Analytics ETL reaches its core steps in the
InsightStaging database.
dbo.s_CreateRuleGroup
This stored procedure applies business rules designed through the Rules Engine functionalities of
InsightETL on a specific database whose name is specified as input parameter. In Analytics ETL, this stored
procedure is directly executed three times, first on the tables imported in the InsightImport database, then
on the tables loaded in InsightImport and then on the tables imported into InsightSource. Furthermore,
this stored procedure is internally called by the s_InsightStaging_Update stored procedure when Analytics
ETL reaches its core steps in the InsightStaging database.
dbo.s_PopulateAuditCounts
This stored procedure performs a count of the records processed within each database involved in ETL
processing. It is executed several times during ETL for record count reconciliation between source and
target databases. The results of these calculations are stored in the dbo.TableRowCountAudit log table in
InsightETL.
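To review the reconciliation results after a run, the log table can be queried directly, for example:
SELECT *
FROM InsightETL.dbo.TableRowCountAudit;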
dbo.s_ColumnStoreIndex_Defragmentation
This procedure forces all of the rowgroups into the columnstore and then combines them into fewer rowgroups
with more rows. The ALTER INDEX REORGANIZE online operation also removes rows that have been
marked as deleted from the columnstore index tables. During Analytics ETL, this stored procedure is
executed twice for maintenance reasons, first within the InsightLanding database and then in
InsightWarehouse.
Example
EXEC [dbo].[s_ColumnStoreIndex_Defragmentation]
@DatabaseName = N'InsightLanding',
@CompressRowGroupsWhenGT = 5000,
@MergeRowGroupsWhenLT = 150000,
@DeletedTotalRowPercentage = 10,
@DeletedSegmentsRowPercentage = 20,
@EmptySegmentsAllowed = 0,
@ExecOrPrint = N'EXEC',
@BatchNum = null
dbo.s_SetBatchStatusFinished
This stored procedure marks the completion of any batch executed, either as ‘CompletedWithError’ if there
are error(s) or ‘CompletedSuccessfully’ when no error is encountered. It is executed at the end of the Analytics
ETL agent job.
Furthermore, InsightETL is used to set the current date through a SQL statement similar to the one below.
Example
InsightImport
Insight.s_ImportBaseTables
This procedure manages the control of the process to Bulk Insert from CSV files to SQL in parallel with DQI
(Data Quality Import) and/or DPI (Data Profiler Import).
Insight.s_ImportSubTables
This procedure parses multi-values, sub-values, and local reference fields creating the associated ‘sub’
tables.
Insight.s_T24ConsolKeys_add
This procedure parses Consolidation keys for GL-related tables like RE_CONSOL_SPEC_ENTRY and any CRF
file.
Steps
1) Updates Batch log
EXEC [InsightETL].dbo.s_MergeUpdateBatchStatus null, null, null
2) Runs the Import.
Exec [Insight].[s_Import_Control] @PathName = 'E:\InsightImport', @TableName = 'All',
@ReCreateTables = 0, @TableType = 'Regular', @SystemTablesExist = 0, @BatchNum = null,
@TotalThreads = null;
3) Checks imported tables against HASH_TOTAL
Exec [InsightImport].[Insight].[s_ImportDataReportErrors] @TableName = 'All', @BatchNum =
null;
4) Performs Local reference fields parsing
Insight.s_BuildCOA
This procedure builds the Chart of Accounts table.
InsightLanding
dbo.s_InsightLandingTable_CSI_Table_Update
Loads data into InsightLanding’s columnstore index tables.
Example
InsightSource
dbo.s_InsightSource_CSI_Update
Loads data from InsightLanding columnstore index tables into the InsightSource database.
Example
InsightStaging
dbo.s_InsightStaging_Update (Extract)
Extracts data into Staging Tables
Example
dbo.s_InsightStaging_Update (Transform)
Transforms data into Dim and Facts and Loads InsightWarehouse
Example
InsightWarehouse
sp_updatestats
Updates Analytics ETL statistics.
dbo.s_BusinessDate_Enable
Enables the business date.
Example
EXEC [InsightLanding].dbo.s_InsightLanding_CSI_Table_Update
    @SourceName = 'BS', @BSDate = @CurrentETLDate,
    @UserId = 'dbo', @CreateRowBaseIndex = 0,
    @DataCompression = 'COLUMNSTORE',

EXEC [InsightSource].dbo.s_InsightSource_CSI_Update
    @sources = 'BS', @BSDate = @CurrentETLDate,
    @BatchNum = null, @TotalThreads = null;

Insight Attributes Calculations-Source [USE InsightETL]
exec s_CreateColumnCalculations 'InsightImport','All','dbo','All',1
exec s_CreateColumnCalculations 'InsightImport','All','dbo','All',2
exec s_CreateIndexes 'All','InsightSource'

Create InsightSource Rules
declare @CurrentETLDate date =

IF @KPICacheRefresh = 'Enabled'
    EXEC msdb.dbo.sp_start_job @job_name = 'KPI Cache Maintenance'

In case the Current ETL Batch ended with Error, Report and Exit
EXEC InsightETL.dbo.s_SetBatchStatusFinished;
Example
@CommandText Nvarchar(200),
@CubeProcessType Nvarchar(50)
'NoProcess' Else
[Value] End
FROM [InsightWarehouse].[dbo].[DimSystemParameters]
'NoProcess' Else
[Value] End
FROM [InsightWarehouse].[dbo].[DimSystemParameters]
--Print @CubeProcessType
--Print @BusinessDate
--Testing
----------------------------------------
If @CubeProcessType = 'ProcessByPartition'
BEGIN
END
-----------------------------------
If @CubeProcessType = 'ProcessFull'
BEGIN
EXEC msdb..sp_start_job @job_name= 'Process Insight Cubes - By Partition', @step_name = 'Full Process'
END
-----------------------------------
If @CubeProcessType = 'NoProcess'
BEGIN
END
Scheduling Jobs
Agent jobs can be scheduled.
In SQL Server Management Studio, right-click on Jobs under SQL Server Agent and select “View History”
from the right-click menu.
EXEC [InsightLanding].dbo.s_InsightLanding_CSI_Table_Update
    @SourceName = 'BS', @BSDate = @CurrentETLDate,
    @UserId = 'dbo', @CreateRowBaseIndex = 0,
    @DataCompression = 'COLUMNSTORE',
    @UpdateTableSchema = 1, @BatchNum = null,
    @TotalThreads = null;

InsightLanding CSI Defragmentation
EXEC [InsightETL].[dbo].[s_ColumnStoreIndex_Defragmentation]
    @DatabaseName = N'InsightLanding',
    @CompressRowGroupsWhenGT = 5000,
    @MergeRowGroupsWhenLT = 150000,
    @DeletedTotalRowPercentage = 10,
    @DeletedSegmentsRowPercentage = 20,
    @EmptySegmentsAllowed = 0,
    @ExecOrPrint = N'EXEC',
    @BatchNum = null

    @ExecutionPhase = N'all',
    @ExecutionStep = 0, --all steps
    @BusinessDate = @CurrentETLDate,
    @IsPersisted = 2, --all
    @RuleDefinitionId = null,
    @BatchNum = NULL,
    @StagingEventLogId = null;
----------------------
Set Completion of the Current ETL Batch
EXEC InsightETL.dbo.s_SetBatchStatusFinished;

In case the Current ETL Batch ended with Error, Report and Exit
EXEC InsightETL.dbo.s_SetBatchStatusFinished;
DW Online Processing
Overview
The DW Online Processing job was introduced in R19 to execute online processing. It has a recurring
schedule with a frequency interval of every five minutes. This interval is configurable through the
SystemParametersLanding rule.
Steps
This job consists of the following steps.
Steps
1. This step runs the InsightSystem..s_GetKPIAlerts stored procedure. This stored procedure checks
for active Report Subscriptions in the InsightSystem..KPIDefinitions table and populates the
KPINoneAlerts table with the list of KPI ids to be processed. The latter table is then used by a view
that drives Step 2.
2. The second step runs the PowerShell script that generates the report files and saves them in the
designated folder that was configured in the System Settings screen of the Analytics Web Front
End.
3. The third step updates the ApplicationLogs table with a confirmation that the required files were
created successfully.
Value = 84
Type = LandingHistoryCSI
Name = Retention Period in Months
Steps
This job includes only one step that executes the s_InsightLanding_CSI_Purge stored procedure.
InsightETL Maintenance
Overview
This agent job purges the content of the Log tables within the InsightETL database (i.e. EventLog,
EventLogDetails, StagingEventLog and StagingEventLogDetails) within a certain date range.
Steps
This job includes only one step (i.e. Purge Old Data), which purges the InsightETL logs by executing the
s_InsightETL_RangePurge stored procedure. The date range to purge is determined through a parameter
stored in the InsightETL.dbo.v_SystemParameters view.
Steps
This job consists of the following steps.
Expire KPI Cache: uses T-SQL code to set the Expiry parameter of the KPI Cache to the current
date
Recalculate KPI Cache: uses T-SQL code to recalculate the value of KPI Cache
The approach used was to minimize the changes required in the InsightStaging source view mapping layer
so the mapping is robust enough to support the majority of multi-company configurations in Core banking.
The design approach is covered in detail so technical users may modify or extend the multi-company
components when they find the mapping does not suit their particular multi-company configuration.
All above Multiple Extract Multiple extracts All above Only Single Extract is currently supported N
INT - Installation: There is only one copy of installation level tables in a Core Banking database, so the
actual table name does not include a company mnemonic. Examples of INT level tables
CUS - Customer: The CUSTOMER and related tables, often including the customer number in the key.
Examples include CUSTOMER.DEFAULT and LIMIT. This type of table can be shared between lead companies.
CST - Customer table: Parameter tables related to customer; examples include INDUSTRY and SECTOR,
together with parameter tables for limits, collateral and position management. This type of table can be
shared between lead companies.
FIN – Financial: There will always be one copy of a financial table per Lead Company. All branches linked
to a Lead Company will share the same financial table. Lead Companies do not share FIN tables. Examples
include ACCOUNT, FUNDS.TRANSFER and TELLER.
FRP - Financial Reporting: There will be one copy of a financial table for every company in the system;
examples include the COB job list tables and REPGEN work files like RGP.TRANS.JOURNAL2.
FTD - Financial Table Descriptive: Financial parameter tables that do not contain data with amounts or
rates linked to a particular local currency. Examples include the AA.PRODUCT type tables and
BASIC.INTEREST. This type of table can be shared between lead companies. FTD tables can be shared
between Companies; the Id of the owner Company is specified in the DEFAULT.FINAN.COM Field. If a
Company owns only a few FTD type files, they are specified in SPCL.FIN.FILE to SPCL.FIN.MNE Fields.
FTF - Financial Table Financial: Financial parameter tables that contain financial data, often containing
local currency amounts. Examples include GROUP.DEBIT.INT, GENERAL.CHARGE and TAX. This type of
table can be shared between lead companies.
CCY - Currency: Tables containing currency-related information; examples include CURRENCY and
PERIODIC.INTEREST. This type of table can be shared between lead companies.
NOS - Nostro: Tables related to NOSTRO accounts. Examples include AGENCY and
NOSTRO.ACCOUNT. This type of table can be shared between lead companies.
An example of a required join is a join from the Account (FIN) table to the Customer (CUS) table. We will
also need to create a new column defined as follows
Clearly, it is not possible to join the Account table to the Customer table without first figuring out some
important pieces of information.
The company check table will help us understand the relationship between different companies in our
installation and what the MASTER company is. As we can see, in this table, the company marked as MASTER
(i.e. first lead company) is LU0010001. LU0019003 is a branch dependent on LU0010001 and CH0010002
is another lead company. We know that CH0010002 is a lead company because it has its own set of financial
files. As we can see from the company code and company mnemonic assigned to the entry with @ID =
CUSTOMER, the customer files are shared and using the master company’s mnemonic and company code.
This information is also confirmed by the company table – the company with mnemonic BCH has, in fact,
its own FINANCIAL_MNE but shares the CUSTOMER_MNEMONIC with BNK.
FROM [InsightSource].[BS].[ACCOUNT] a
LEFT JOIN [InsightSource].[BS].[COMPANY] COM ON
    COM.MNEMONIC = A.BRANCH_CO_MNE
LEFT JOIN [InsightSource].[BS].[COMPANY_CHECK_Company_Mne] CO_CHK ON
    CO_CHK.[@ID] = 'MASTER' and CO_CHK.[Sequence] = 1
LEFT JOIN [InsightSource].[BS].[CUSTOMER] c ON
    COM.CUSTOMER_MNEMONIC = c.LEAD_CO_MNE and a.CUSTOMER = c.[@ID]
Resulting in:
JOIN features
The general procedure to determine what the join should be includes the following steps:
1. Determine the type of table of the core (the first table referenced in the From Clause of a
v_source view) table. In the case of v_sourceAccountBSAA_Accounts it is AA_ARRANGEMENT, by
referring to the FILE_CONTROL table we determine that AA_ARRANGEMENT is a FIN table.
2. Then by referring to Table 1 and the column [Used to Join Table Type], we look for FIN and find
that FIN relates to Financial_mne.
3. Therefore the join from the Company table to the joined-to (ACCOUNT) table would be ON
COM.FINANCIAL_MNE = A.LEAD_CO_MNE and AA.LINKED_APPL_ID = A.[@ID]
4. Use the Company_check table to retrieve information about the MASTER
company
5. The first part of the join is usually the same.
6. Select * FROM [InsightSource].[BS].[AA_ARRANGEMENT] aa
LEFT JOIN
[InsightSource].[BS].[COMPANY] COM ON COM.MNEMONIC = AA.BRANCH_CO_MNE
The first table always links on BRANCH_CO_MNE.
7. The Complete join would be
Select *
FROM [InsightSource].[BS].[AA_ARRANGEMENT] aa
LEFT JOIN
[InsightSource].[BS].[COMPANY] COM ON COM.MNEMONIC = AA.BRANCH_CO_MNE
LEFT JOIN
[InsightSource].[BS].[ACCOUNT] a ON a.LEAD_CO_MNE = COM.FINANCIAL_MNE
AND A.[@ID] = aa.LINKED_APPL_ID
The other table always links on LEAD_CO_MNE.
8. Any subsequent joins can (generally) be joined on the Company table,
which has already been joined to the first table in the v_source view.
Customer_company
Customer_mnemonic CUS
Currency_company
Currency_mnemonic CUR
DEFAULT_FINAN_MNE INT
SPCL_FIN_MNE
DEFAULT_CUST_MNE
SPCL_CUST_MNE
Finan_Finan_Mne FTF
V_Source views
V_SourceCurrencyBS
V_SourceBranchBS
Fields in this view are used to populate DimBranch as well as to help set the SourceCurrencyID by executing
Dataset update statements defined in InsightETL.dbo.RuleDefinitions. See Master Data below.
InsightETL
Dataset business rules defined in InsightETL are used to set the SourceCurrencyID of the fact tables for
which the conversion to Presentation currency is done.
FactGL
FactAccount
FactGLTran
FactAcctTran
For example:
Update a
SET
    isnull(S.Value,1)
From
    stagingGL a
    on a.sourcebranchid = ba.sourcebranchid
    --The derived table below gets the Correct currency rate for the record, either the straight rate (code 1)
    --or the Weighted Average Rate (eg. Code 99).
    on S.Name = a.GLInsightAttribute1
Data Model
FactAccount
FactGL
FactAcctTran
FactGLTran
This allows these tables to link to DimCurrency to get the applicable currency cross code, e.g. USDGBP,
to convert from the Lead Company’s currency to the Master Company’s currency.
The DimCurrency table is then linked to the FactCurrency table to get the applicable rate.
[Data model diagram: FactAccount, FactGL, FactAcctTran and FactGLTran carry CurrencyId and SourceCurrencyId columns and link to DimCurrency, which in turn links to FactCurrency to obtain the applicable rates.]
Reporting Views
The calculation of the Presentation Balance/Amounts is done in the Reporting views, v_Account, v_GL,
v_AcctTran, v_GLTran. The presentation balance calculation is configured in
InsightWarehouse..DataDictionary.
For example
SELECT
[DimAccount].AccountId as AccountId
, [FactAccount].BranchId as AcctBranchId
, [FactAccount].Balance as Balance
-----
, ([FactAccount].ForeignCurrencyBal * [DimCurrency>FactCurrency].MultiplierRate) as
PresentationBalance
FROM
on
FactAccount.CurrencyId = [FactAccount>DimCurrency].CurrencyId
on
[FactAccount>DimCurrency].[CurrencyId] = [DimCurrency>FactCurrency].CurrencyId
Multi-Tenant Deployment
The Advanced Analytics Platform can be set up in a multi-tenant environment. In R18, we do not have a
specific installer for multi-tenant deployment, but there is a SQL package used for this purpose in which a
specific variable or flag will be set if we decide to avail of the multi-tenant option.
When the platform is initially installed with this option enabled, at least two tenants will be made available
by default, i.e. Tenant0 and Tenant1, both stored in the InsightSystem.dbo.Tenants table. The former will
store a user in charge of tenant administration while the latter can normally be used for business-as-usual
Analytics and BI16.
When external applications need to access the current database (e.g. Team Foundation Server), a number
of publish variables will be used to reference the correct tenant database. For example, the $(InsightSource)
variable will be replaced with InsightSource_ABC during publish. It is important to remember that all code
must use variables instead of hardcoded database names.
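For example, code that reads from InsightSource should reference the publish variable rather than a fixed database name, so that it resolves to the tenant-specific database (InsightSource_ABC in the example above) at publish time:
-- Resolves to InsightSource_ABC for tenant ABC during publish.
SELECT COUNT(*) FROM [$(InsightSource)].[BS].[ACCOUNT];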
Data values
Tables such as AttributeCalculations, CustomColumn and UpdateOrder may have hardcoded database
names. These will be changed in the post-deploy script by replacing the database names.
Scheduled Jobs
Separate scheduled jobs will be created for each tenant, e.g. Analytics ETL (ABC),
InsightLandingMaintenance (ABC), etc. Each step will point to the appropriate tenant database. This is
accomplished with a $(tenant) parameter in the Post Deploy script of TFS.
User Roles
Users’ roles are customized for the installation. This configuration can either consist of creating Server
Roles for each tenant or using Active Directory groups.
Publishing
The publishing process simply consists of the execution of an ad hoc PowerShell script. This manual publish
from Team Foundation Server requires all the $(database) parameters to be populated with the proper
tenant suffix. For publishing from a build, we should use the standard Framework/ModelBank build. A
new InsightPublishParameters.xml will include a Tenant parameter. The new PowerShell script
InsightPublishDatabaseTenant.ps1 will use the tenant parameter to change the target database names.
For the local layer, it is assumed the database name parameters in TFS will be set for each tenant in
separate local layers.
16 The Tenants table is thereafter managed through the Tenant Management option of the System Menu
in the Analytics Application front end. Please refer to the Analytics Web Front End User Guide for more
detailed information about this functionality.
Cubes
During publish a script will be created to rename the SSAS database name in the xml file. Connection
strings in TemenosSSASFunctions also must be changed to use the proper tenant suffix database.
Reports
As for cubes, we should change any report data sources to point to the appropriate database name.
Application
Finally, we should change any application settings to point to the appropriate database name.
General Ledger
There are five objects (that include tables and a number of abstraction views) in InsightWarehouse that
are dedicated to storing General Ledger-, Profit & Loss- and Budget-related data: GL, GLConsolidated,
GLTran, GLAdjmt and Employee.
Dimensions of the GL, PL and Budget entries are hosted in the DimGL table while measures reside in
FactGL. Furthermore, GL transactions are stored in the FactGLTran table, FactGLAdjmt contains GL
Adjustments data and DimEmployee stores the information of the Department or of the Department
Account Officer (i.e. the Employee) managing a particular GL entry. This table stores, in general,
information about the hierarchical structure of a financial institution and will be used to identify the
employee or department in charge of specific accounts and customers in the Advanced Analytics Platform.
Finally, DimGLConsolidated and FactGLConsolidated contain dimensions and facts about all Chart of
Accounts consolidated entries, including those that do not have balance information.
Data Model
As we can see from the diagram below, FactGL is linked to DimGL and DimEmployee. FactGL and
FactGLTran are linked to FactGLAdjmt, while the GLConsolidated tables are connected to one another and
also to FactGLTran, FactGL, FactGLAdjmt and DimEmployee.
GL Adjustments
This section covers the GL Adjustments functionality of Analytics and the Core banking specific configuration
required.
[Figure 34: data model diagram showing the FactGL, DimGL and DimEmployee tables and their columns.]
As we can see in
Figure 34, an adjustment must have a corresponding GL Fact. It will be linked to dimGL (always linked to
a GL Account). It will be linked to dimEmployee, to track which employee made this adjustment.
Multiple rows are created in FactGLAdjmt, between Business Date and Posted Date. For example, if an
adjustment is posted on Friday, meant to have taken effect on the previous Monday, then four rows will
be created in FactGLAdjmt, with Business Dates of Monday through Thursday.
Adjustments made in Core Banking will have a corresponding row in FactGLTran. The source of FactGLTran
and FactGLAdjmt is the same; the Adjustments are a subset of the GL Transactions, where Business Date
is earlier than Posted Date.
Should GL be extracted from a Third Party source system, local customization can be implemented to fit
this data into InsightWarehouse.
InsightETL: No changes.
InsightSource: For default data, the table dbo.GLAdjmt will hold GL Adjustments entered directly to
Insight, duplicated for all applicable dates.
RE_CONSOL_SPEC_ENTRY
STMT_ENTRY
CATEG_ENTRY
PC_CATEG_ADJUSTMENT
PC_STMT_ADJUSTMENT
The first three tables above contain each and every transaction coming for the current date from Core
banking while the last two tables prefixed with PC only contain records describing adjustment information.
Therefore, the method we use to identify adjustments is to reconcile the former three tables with the latter
two tables.
DW Export
Adjustments will only come from the Source System (Temenos Core banking), in the following DW Export
tables:
RE_CONSOL_SPEC_ENTRY
STMT_ENTRY
CATEG_ENTRY
PC_CATEG_ADJUSTMENT
PC_STMT_ADJUSTMENT
The GL Adjustment entries in these tables have already been made into multiple rows, as described above.
InsightImport
Post-Closing GL Adjustments, the only adjustment currently coded in the view sources, are contained in
five existing tables which are extracted from Temenos Core banking by DW Export:
RE_CONSOL_SPEC_ENTRY
STMT_ENTRY
CATEG_ENTRY
PC_CATEG_ADJUSTMENT
PC_STMT_ADJUSTMENT
For all these tables, GL Adjustments have SYSTEM_ID = ‘DC’, and VALUE_DATE < BOOKING_DATE.
There is a Core Banking module called Data Capture, where adjustments are manually added. This is why
we filter on ‘DC’. The Value Date represents the effective date, the date to which adjustments are to be
back-dated. The Booking Date represents the date the adjustment was entered into Data Capture.
There are three ways of identifying adjustments: to begin with, we can join the STMT_ENTRY table with
the PC_STMT_ADJUSTMENT – if the id of an STMT_ENTRY record is also found in PC_STMT_ADJUSTMENT,
this means that said record should be considered as an adjustment. Secondly, a JOIN can be established
between CATEG_ENTRY and PC_CATEG_ADJUSTMENT – if the id of a record found in the former is also
found on the latter, the record is an adjustment. Thirdly, we search for ids of records coming from
RE_CONSOL_SPEC_ENTRY in either PC_CATEG_ADJUSTMENT or PC_STMT_ADJUSTMENT – if an id coming
from the former matches with an entry in any of the latter tables, then we have found an adjustment.
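As a hedged illustration of the first of these reconciliations, the sketch below flags STMT_ENTRY records that are adjustments; the schema qualification is omitted and the key column match is an assumption to be verified against the imported tables.
-- Illustrative only: STMT_ENTRY records that are GL Adjustments
SELECT se.*
FROM STMT_ENTRY se
JOIN PC_STMT_ADJUSTMENT adj
    ON adj.[@ID] = se.[@ID]            -- assumed key match
WHERE se.SYSTEM_ID = 'DC'
  AND se.VALUE_DATE < se.BOOKING_DATE;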
InsightStaging
Views
A number of v_source views are used to populate FactGLAdjmt:
The views create multiple rows for a single adjustment. For example, if an adjustment was entered into
Data Capture on Friday, to be effective on the previous Monday, then four rows would be created, with
business dates of Monday, Tuesday, Wednesday, and Thursday, all with a Posted Date of Friday.
Stored Procedures
Three stored procedures are used specifically for GL Adjustments; they are similar to the
Extract/Transform/Load stored procedures for other Transaction Facts.
s_FactGLAdjmt_Extract
s_FactGLAdjmt_Transform
s_FactGLAdjmt_Load
InsightWarehouse
Views
The final calculation of the adjustment is done in InsightWarehouse views like v_GLConsolidated and v_GL.
For example:
, [DimGLConsolidated].GLThirdPartyAttribute3 as GLThirdPartyAttribute3
, [DimGLConsolidated].GLThirdPartyAttribute4 as GLThirdPartyAttribute4
, [DimGLConsolidated].GLThirdPartyAttribute5 as GLThirdPartyAttribute5
, [DimGLConsolidated].GLThirdPartyAttribute6 as GLThirdPartyAttribute6
, [DimGLConsolidated].GLThirdPartyAttribute7 as GLThirdPartyAttribute7
, [DimGLConsolidated].GLThirdPartyAttribute8 as GLThirdPartyAttribute8
, [DimGLConsolidated].GLThirdPartyAttribute9 as GLThirdPartyAttribute9
, [DimGLConsolidated].SourceApplication as SourceApplication
, [FactGLConsolidated].GLAmount as GLUnAdjmtAmt
, [FactGLConsolidated].GLForeignAmount as GLUnAdjmtForeignAmt
, [FactGLConsolidated].SystemSecurityId as SystemSecurityId
, case when DimGLConsolidated.GLSourceSystem = 'T24' then
ISNULL([FactGLConsolidated].GLAmount,0) + isnull([FactGLConsolidated>Adjmt].GLAdjAmtSum,0)
end as GLActualAmount
, isnull([FactGLConsolidated>Adjmt].GLAdjAmtSum,0) as GLAdjmtAmount
, isnull([FactGLConsolidated>Adjmt].GLAdjForeignAmtSum,0) as GLAdjmtForeignAmt
, [FactGLConsolidated].GLAmount + isnull([FactGLConsolidated>Adjmt].GLAdjAmtSum,0)
as GLAmount
, case when DimGLConsolidated.GLSourceSystem = 'Budget' then
ISNULL([FactGLConsolidated].GLAmount,0) + isnull([FactGLConsolidated>Adjmt].GLAdjAmtSum,0)
end as GLBudgetAmount
, [FactGLConsolidated].GLCredits + isnull([FactGLConsolidated>Adjmt].GLCredits,0)
as GLCredits
, [FactGLConsolidated].GLDebits + isnull([FactGLConsolidated>Adjmt].GLDebits,0) as
GLDebits
, [FactGLConsolidated].GLForeignAmount +
isnull([FactGLConsolidated>Adjmt].GLAdjForeignAmtSum,0) as GLForeignAmount
, [FactGLConsolidated].GLForeignCredits +
isnull([FactGLConsolidated>Adjmt].GLForeignCredits,0) as GLForeignCredits
, [FactGLConsolidated].GLForeignDebits +
isnull([FactGLConsolidated>Adjmt].GLForeignDebits,0) as GLForeignDebits
, [FactGLConsolidated].GLForeignInitialAmount +
isnull([FactGLConsolidatedLastBusinessDate>Adjmt].GLAdjForeignAmtSum,0) as
GLForeignInitialAmount
, [FactGLConsolidated].GLInitialAmount +
isnull([FactGLConsolidatedLastBusinessDate>Adjmt].GLAdjAmtSum,0) as GLInitialAmount
from
FactGLConsolidated
left join
DimGLConsolidated on
DimGLConsolidated.GLConsolidatedId = FactGLConsolidated.GLConsolidatedId
left join
DimBranch [factglConsolidated>DimBranch] on
FactGLConsolidated.BranchId = [factglConsolidated>DimBranch].BranchId LEFT
JOIN
(select GLConsolidatedId, BusinessDate,
sum(GLAdjmtAmount) as GLAdjAmtSum,
sum(GLAdjmtForeignAmt) as GLAdjForeignAmtSum,
max(GLAdjmtPostedDate) as AdjPostedDateMax,
ISNULL(sum(case when GLAdjmtAmount < 0 and GLAccountingDate = BusinessDate then
ABS(GLAdjmtAmount) end),0) as GLDebits,
ISNULL(sum(case when GLAdjmtForeignAmt < 0 and GLAccountingDate = BusinessDate then
ABS(GLAdjmtForeignAmt) end),0) as GLForeignDebits,
ISNULL(sum(case when GLAdjmtAmount > 0 and GLAccountingDate = BusinessDate then
ABS(GLAdjmtAmount) end),0) as GLCredits,
left join
(select GLConsolidatedId, BusinessDate,
sum(GLAdjmtAmount) as GLAdjAmtSum,
sum(GLAdjmtForeignAmt) as GLAdjForeignAmtSum
from [InsightWarehouse].dbo.FactGLAdjmt group by GLConsolidatedId, BusinessDate)
[FactGLConsolidatedLastBusinessDate>Adjmt] on
[FactGLConsolidatedLastBusinessDate>Adjmt].GLConsolidatedId =
FactGLConsolidated.GLConsolidatedId and
[FactGLConsolidatedLastBusinessDate>Adjmt].BusinessDate = Dates.BSLastBusinessDate
--Inner Join
--dbo.v_SysSecVisibleRecords on FactGL.SystemSecurityId =
--v_SysSecVisibleRecords.SystemSecurityId
GO
The output below shows a sample result from v_GLConsolidated, including the GL Amount:
For all existing reports, GL Adjustments are automatically included in all amounts shown.
GL Adjustments are added to the corresponding GL Amount for that day and reported as GL Amount.
This is the original GL amount for the day, before any adjustments. This amount is not normally shown on
any reports.
Reports
All Financial Analytics reports provided Out-of-the-box will, by default, include the adjustments in any GL
Amounts displayed. The v_GLConsolidated view, which supplies GL data to the reports, will include the
adjustments in the existing GLAmount and GLForeignAmount fields. The adjustments will be grouped by
GL Account and Business Date and totaled.
Since the GLAmount and GLForeignAmount columns contain compound values, the fields below are crucial
to understanding the breakdown of the individual components of the aforementioned figures.
V_GLConsolidated would supply the data for General Ledger, Income Statement and Balance Sheet reports
and dashboards like the one below.
GL Consolidated
The GLConsolidated object consists of a Dim and a Fact table that contain records for all the Chart of
Accounts (i.e. COA) entries, including those without balances.
InsightStaging
Views
A number of v_source views are used to populate the DimGLConsolidated and FactGLConsolidated tables,
e.g.:
Stored Procedures
Six stored procedures are used specifically for GL Consolidated; they are similar to the
Extract/Transform/Load stored procedures for other Transaction Facts.
s_DimGLConsolidated_Extract
s_DimGLConsolidated_Transform
s_DimGLConsolidated_Load
s_FactGLConsolidated_Extract
s_FactGLConsolidated_Transform
s_FactGLConsolidated_Load
InsightWarehouse
Tables
As previously discussed, the two new tables added to InsightWarehouse to store GLConsolidated data are
DimGLConsolidated and FactGLConsolidated, and they inherit a number of columns that, in earlier
releases, belonged to the GL object, i.e. all Banking System, Analytics and Third Party Attributes and the GL
Number-related columns. The GLConsolidated tables also store the aggregated balances from CRF by
Company, Branch, GL Number and Currency, and GL Adjustments at the GL Number level.
Views
The new GLConsolidated object involved the creation of three new abstraction views in the
InsightWarehouse database i.e. v_GLConsolidated, v_CubeDimGLConsolidated and
v_CubeFactGLConsolidated. The first of these views will expose the content of the GLConsolidated object
to the report layers while the second and the third one will allow it to be published to the GLConsolidated
SSAS Cube.
Cubes
As previously mentioned, a new Cube called GLConsolidated has also been created as part of the new
GLConsolidated object. This cube will automatically get installed as part of the Financial Analytics Content
Package.
Quick Reports
The following Quick reports use GLConsolidated as their main source of data.
Dashboards
All Retail Analytics dashboards currently rely on data from the GLConsolidated object.
Pivot Reports
The following Pivot reports use GLConsolidated as their main source of data.
Balance Sheet
Balance Sheet Movements
Balance Sheet MTD Analysis
Balance Sheet YoY Analysis
Balance Sheet YTD Analysis
Income Statement
Income Statement Movements
Income Statement MTD Analysis
Income Statement YoY Analysis
Income Statement YTD Analysis
Custom Reports
All Custom Reports currently rely on data from the GLConsolidated object.
Since the original directive was passed, there have been a series of rapid technological and business
advances which have brought new challenges to the use and protection of personal data. As a result, there
was a desire to produce an updated set of regulations to reflect the new technological and business
landscape.
In addition, the Lisbon Treaty created a new legal basis for a modernised and comprehensive approach to
data protection, including the free movement of data within the EU.
GDPR was designed to resolve three issues that had become apparent since the implementation of the original legislation:
Since the Data Protection Directive, there has been an inconsistent approach to the application of data protection across the European Union, which has created barriers for business and public authorities due to legal uncertainty and inconsistent enforcement.
Difficulties for individuals to stay in control of their personal data.
Gaps and inconsistencies in the protection of personal data in the field of police and judicial co-operation in criminal matters.
As part of the free movement of data within the EU, the GDPR gives data subjects (typically EU citizens) a new series of rights with respect to their data. These rights are listed below:
The key dates for the General Data Protection Regulation are:
This document provides a high-level solution for GDPR and includes the following:
Rights of Erasure, a mechanism to capture and validate the request and trigger the erasure process. This is done at the column level since different columns can have different required retention periods.
Personal Data Definition, a mechanism to build, maintain and store the metadata definition of personal data for Analytics data stores, and a mechanism to import the PDD from Core Banking.
Importing metadata from Core Banking in order to act upon requests for the Right to Erasure.
Consent Management.
Identifying personal data across all Analytics data stores
Sharing metadata details with all other Temenos products
Support L2 country layer to identify personal data
Support L3 Local development to identify personal data
In Scope
Overview
This section contains an overview of Right to Erasure processing as well as Consent Management.
Metadata to define the above erasures will be stored in Analytics and will be populated by Core Banking
extracts, as well as Analytics application metadata configured by administrators of the Temenos Core
Banking Customer Data Protection module (commonly abbreviated to CDP).
The Temenos Analytics Customer Data Protection solution for GDPR consists of the following components:
Populating Customer Data Protection Metadata into the Rules Engine Data Model
There are two processes, one for erasing raw Temenos Core Banking data extracts stored in InsightLanding,
and one for erasing columns in InsightWarehouse. The metadata for each process has the same form but
is populated differently.
Erasure Process
The erasure process will be triggered when the Action Date in the above metadata is reached.
The Action Date is based on the erasure date in the case of data from CDP_DATA_ERASED_TODAY, or on the configured retention periods per column and the date the Customer became inactive in the case of Analytics data.
Erasure can be run in validation mode so that it can be confirmed that a reasonable number of records are affected and that the generated update statements are correct.
Once erasure is in progress, the status in RuleCustomers will be set to Erasure In Progress.
Consent Management
Customer consent preferences are managed in core banking. Analytics imports the metadata relating to consent and makes it available to Analytics users both in raw format, as a lookup table in InsightSource, and as flags in InsightWarehouse..DimCustomer.
InsightImport is configured to import the new CDP consent tables. Two RulesEngine rules are added to
create a Consent lookup table in InsightSource, and then to add data from that lookup table to
InsightWarehouse..DimCustomer.
Consent data is imported via DW Export from Core Banking. These consent products are rolled up to the
customer level in Data Manager as a Has Consent flag and exposed in InsightWarehouse.
Rights Management
For the following Rights, automatic or manual Rights processing will be triggered once the
request is approved and authorised.
o Right to Erasure
o Right to Restrict processing
Customer activity will be stored in RuleCustomers. When customers have invoked rights in Core,
they will be entered in this table and a bridge table RuleCustomerRuleColumn will store the
personal data definition for this customer and the rights status for each column.
Considerations/Dependencies
The system is dependent on Customer Data Protection metadata originating from Temenos Core Banking.
RuleDefinitions
RuleColumns
RuleCustomers
RuleCustomersRuleColumns
RuleReplacements
CDPPurpose
Metadata
Metadata describing invoked rights to erasure is stored in the above data model.
Analytics Metadata
Metadata will be used to create Analytics rules which can then be used to generate logic to erase customer
data.
InsightWarehouse Erasures
Analytics Table            Source Data                        Description
RuleDefinitions            Datadictionary                     Contains the sensitive customer table.
RuleColumns                Datadictionary                     A group of sensitive columns in the table above.
RuleReplacements           Lookup table with default values   Contains the list of erase options that can be applied to each field based on the data type.
CDPPurpose                 Lookup table with default values   Contains the retention period per purpose code.
RuleCustomers              CUSTOMER.ACTIVITY                  Contains customer Right to Erasure requests.
RuleCustomersRuleColumns   CUSTOMER.ACTIVITY                  An intersection table between RuleColumns and RuleCustomers. Contains the RuleCustomer Id, the RuleColumnId and the ActionDate when the data from a particular column should be erased. This is calculated based on the purpose code associated with a particular column and is customer-specific, since each purpose code has a set number of retention days.
Views
Views                                    Description
v_ConvertCDPRules                        Maps T24 CDP_DATA_ERASED_TODAY to RuleDefinitions and RuleColumns column names.
v_ConvertCDPRuleCustomers                Maps T24 CDP_DATA_ERASED_TODAY and CUSTOMER_ACTIVITY to RuleCustomers column names.
v_ConvertCDPRuleCustomersRuleColumns     Maps T24 CDP_DATA_ERASED_TODAY to RuleCustomersRuleColumns column names.
v_ConvertCDPAnalyticsRulesDimensions / v_ConvertCDPAnalyticsRulesFacts
                                         Map InsightWarehouse..Datadictionary CDP columns to RuleDefinitions and RuleColumns column names.
Mapping Details
RuleDefinitions
RuleColumns
RuleReplacements
CDPPurpose
RuleCustomers
RuleCustomersRuleColumns
Stored Procedures
The stored procedures above call the stored procedure s_LoadTemporalData which does the actual
inserting/updating into the target metadata tables. Loads can be Temporal or not. Temporal means that
all changes will be preserved by business date.
An update statement is generated for each column and for each customer. Updates will be generated based on the ActionDate for a particular customer's column. Only columns with action dates in the past will be updated.
Update statements can be simple if the table to be updated contains a customer number; if not, bridge tables need to be added to the statement in order to link to a table that contains the customer number, so that the query can be filtered by the appropriate customer.
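As an illustration only, a generated statement for the simple case could look like the sketch below; the table, column and replacement values are hypothetical and are in practice driven by the RuleReplacements and RuleCustomersRuleColumns metadata.
-- Hypothetical generated erasure statement for a table that holds the
-- customer number directly (all names and values are illustrative).
UPDATE dc
SET    dc.EmailAddress = 'ERASED'        -- replacement value from RuleReplacements
FROM   InsightWarehouse.dbo.DimCustomer dc
WHERE  dc.CustNo = '100234'              -- customer from RuleCustomers
  AND  dc.LeadCoMne = 'BNK';             -- company of the customer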
Parameters: @DatabaseName, @SchemaName, @TableName
The procedure would first be run as:
Exec s_ExecuteCDPRules '2018-05-12', 'All', 'Validate'
This will mark each generated statement for each customer as IsValidated = 1, with a validation record count which is the number of affected records.
Calls: s_CreateCDPUpdateStatements
RuleResultsLog
Column                  Description
SQLStatement            The generated SQL statement that will "erase" customer data.
RuleDefinitionID        The associated rule for the columns to be erased.
CustNo                  The customer number for which data will be erased.
LeadCoMne               The company of the customer.
IsApproved              Has a generated statement been approved? (Manual process.) This column needs to be manually updated after approval is verified.
ValidationRecordCount   The number of records to be updated. This is updated on validation.
IsValidated             0 or 1. Set when s_ExecuteCDPRules is run in validation mode.
HasExecuted             0 or 1. Has the statement been run and the data erased?
ActionDate              The date that the column can be erased based on retention policy.
ExecutedDate            The date when the statement was executed and committed.
Agent Job
An agent job will run the following daily:
At this point, user approval is required due to the large impact of running a bad statement. Currently, approval is done by reviewing the RuleResultsLog table and manually marking the IsApproved column as True. Once IsApproved and IsValidated are both true, the CDP erasure update statement will be executed.
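A minimal sketch of this manual approval step is shown below; the filter criteria are illustrative and each institution should decide which validated statements to approve.
-- Hypothetical approval of validated erasure statements for one customer.
UPDATE InsightETL.dbo.RuleResultsLog
SET    IsApproved = 1
WHERE  IsValidated = 1
  AND  ValidationRecordCount > 0
  AND  CustNo = '100234';   -- illustrative customer number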
Configuration
The metadata to erase customer data comes from T24 via DW Export and from columns in InsightWarehouse..Datadictionary. A number of changes are required to existing Analytics installations in order to produce the required data.
See TFS IM Framework / Configuration for the latest configuration values. All configuration data is in XML
files.
InsightImport
Add the following records to InsightImport in order to import CDP tables from T24.
…RACT_LINK:COMPLETED_CONTRACT_CO_CODE:CONTRACT_END_DATE,OTHER_LINKED_APPLN:OTHER_LINKED_RECORD:OTHER_LINKED_CO_CODE:OTHER_LINKED_REC_STATUS

TableName: CZ_CDP_DATA_ERASED_TODAY
SchemaName: Dbo
Enabled_: 1
T24RebuildFromSelectionifMissing: Yes
T24SelectionName: CZ.CDP.DATA.ERASED.TODAY
T24Multi-valueAssociation: RECORD_ID:COMPANY_ID:FIELD_NAME:PURPOSE:ERASE_OPTION:NEW_FIELD_VALUE
T24Sub-valueAssociation: CZ_CDP_DATA_ERASED_TODAY_RECORD_ID|FIELD_NAME:PURPOSE:ERASE_OPTION:NEW_FIELD_VALUE
Configuration: ModelBank_CDP

TableName: CZ_CDP_DATA_DEFINITION
SchemaName: Dbo
Enabled_: 1
T24RebuildFromSelectionifMissing: Yes
T24SelectionName: CZ_CDP_DATA_DEFINITION
T24Multi-valueAssociation: SYS_FIELD_NAME:SYS_FIELD_ATTRIBUTES:SYS_PURPOSE:SYS_ERASE_OPTION:SYS_ACCESSIBILITY:SYS_EXCLUDE
T24Sub-valueAssociation: CZ_CDP_DATA_DEFINITION_SYS_FIELD_NAME|SYS_PURPOSE
Configuration: ModelBank_CDP

TableName: CZ_CDP_ERASE_OPTION
SchemaName: Dbo
Enabled_: 1
T24RebuildFromSelectionifMissing: Yes
T24SelectionName: CZ_CDP_ERASE_OPTION
T24Multi-valueAssociation: NULL
T24Sub-valueAssociation: NULL
Configuration: ModelBank_CDP
Run InsightImport
Exec [Insight].[s_Import_Control] @PathName = 'C:\Datafoler', @TableName = 'All',
@ReCreateTables = 1, @TableType = 'Regular', @SystemTablesExist = 0, @BatchNum = null,
@TotalThreads = null;
Lookup Tables
Configure the following lookup tables.
InsightETL..RuleReplacements
Default values should be populated when InsightETL is published. At a minimum the following values should
be present. This data is used for erasure of InsightWarehouse data only, not InsightLanding. The data is
used to determine what to replace erased data with.
InsightETL..CDPPurpose
Default values should be populated when InsightETL is published. At a minimum the following values should
be present. This data is used for erasure of InsightWarehouse data only, not InsightLanding. The data is
used to determine when InsightWarehouse data is erased.
TAX LEGAL 8D
CONSENT CONSENT 8D
InsightWarehouse..DataDictionary
Exec [dbo].[s_LoadCDPRules]
Exec [dbo].[s_LoadCDPRuleCustomers]
Exec [dbo].[s_LoadCDPRuleCustomersRuleColumns]
Exec [dbo].[s_LoadCDPAnalyticsRules]
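The validation pass itself can then be run with a call such as the sketch below; the parameter values are illustrative and follow the Final Execution example later in this section.
Exec InsightETL.[dbo].[s_ExecuteCDPRules]
@BusinessDate = '2018-12-24',
@CustNo = 'all',
@ExecuteMode = 'Validate',
@DatabaseName = 'insightwarehouse',
@SchemaName = 'all',
@TableName = 'all';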
This will execute the defined erasures in Validate mode and will populate the InsightETL..RuleResultsLog table with erasure statements.
If the validation is successful, RuleResultsLog.IsValidated will be set to 1 and, if records exist for the customer, ValidationRecordCount will have a positive, non-zero value.
Final Execution
Once data has been validated the update statements can be run as follows:
Exec InsightETL.[dbo].[s_ExecuteCDPRules]
@BusinessDate = '2018-12-24',
@CustNo = 'all',
@ExecuteMode = 'Execute',
@DatabaseName = 'insightwarehouse',
@SchemaName = 'all',
@TableName = 'dimcustomer' -- or 'all' for all tables.
Consent Management
Datamodel
Table Details
This table will be populated by the RulesEngine using a Dataset rule and a SourceSQLStatement. The table
will be re-created every ETL load based on the contents of AA_Arrangement.
InsightSource.BS.CDPCustomerConsentList
Column      Description
CustNo      The customer number.
CompanyNo   The customer company.
LeadCoMne   The company mnemonic of the customer.
Product     The consent product.
StartDate   The date consent was given.
EndDate     The date consent was revoked.
InsightWarehouse.dbo.DimCustomer
Metadata
Temenos Core Banking
The following tables from Temenos Core Banking are used for Consent Management and will be used to
create the CDPCustomerConsentList lookup table.
Rule Definitions
Two RuleDefinitions will be defined in the Rules Engine: one to create the required CDPCustomerConsentList table and one to add the required columns to DimCustomer.
Dataset rules with SQL statements will be used. The first rule to create the CDPCustomerConsentList lookup
table will have a SQL statement selecting from AA_Arrangement and the other CDP consent tables. The
second rule will be a rollup of the lookup table in order to add consent flags at the customer level to
InsightWarehouse..DimCustomer.
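As an illustration only (the shipped rule text differs and is shown later in this section), the customer-level rollup could be expressed along the lines of the sketch below, using the CDPCustomerConsentList columns described under Table Details; the StagingCustomer column names are assumptions.
-- Hypothetical rollup sketch: flag customers that have at least one
-- consent product that has not been revoked.
SELECT  c.CustNo,
        c.LeadCoMne,
        CASE WHEN COUNT(cl.Product) > 0 THEN 'Yes' ELSE 'No' END AS HasConsent
FROM    InsightStaging.dbo.StagingCustomer c
LEFT JOIN InsightSource.BS.CDPCustomerConsentList cl
        ON  cl.CustNo = c.CustNo
        AND cl.LeadCoMne = c.LeadCoMne
        AND cl.EndDate IS NULL            -- consent not revoked
GROUP BY c.CustNo, c.LeadCoMne;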
Configuration
The metadata to manage customer consent comes from T24 via DW Export. A number of changes are
required to existing Analytics installations in order to produce the required data.
InsightImport
Add the following records to InsightImport in order to import CDP tables from T24.
InsightLanding
The following records are added to InsightLanding in order to make the tables flow from InsightImport.
InsightWarehouse Datadictionary
New consent flags (Yes/No) are added to the DataDictionary. For example:
Simple
RestrictProcessing
RestrictMarketing
Detailed
BiometricConsent
EmailConsent
GeolocationConsent
LetterConsent
PhoneConsent
SMSConsent
RulesEngine Configuration
Two new rules are configured as follows.
RuleDefinitions
ExecutionStep      An additional filter as above. Set to 1.
IsPersisted        1 – Add a physical column to a base table; 0 – Create a view around a base table with the column(s) added to the view. Set to 1.
Description        Creates CDPCustomerConsentList Table
RuleDefinitionId   Unique ID of the Rule definition.
LegacyItemId       The ID of the Rule in legacy processes.
ItemName           CDPCustomerConsentList
Operation          Lookup / Banding / Calculation / Split / MaxColumn / Dataset. Set to 'Dataset'.
DatasetTable
Simple:
id_comp_1,
Max(id_comp_3) AS ID_Comp_3
FROM insightsource.[BS].aa_arr_cdp_consent ac1
GROUP BY lead_co_mne, id_comp_1) dtac
ON ac1.id_comp_1 = dtac.id_comp_1
AND ac1.id_comp_3 = dtac.id_comp_3) AC
ON A.[@id] = AC.id_comp_1
AND A.lead_co_mne = AC.lead_co_mne
INNER JOIN insightsource.[BS].[aa_arr_cdp_consent_consent_type] ACT
ON AC.[@id] = ACT.[@id]
Detailed:
id_comp_1,
Max(id_comp_3) AS id_comp_3
FROM insightsource.[BS].aa_arr_cdp_consent ac1
GROUP BY lead_co_mne, id_comp_1) dtac
ON ac1.id_comp_1 = dtac.id_comp_1
AND ac1.id_comp_3 = dtac.id_comp_3) AC
ON a.[@id] = ac.id_comp_1
AND a.lead_co_mne = ac.lead_co_mne
INNER JOIN insightsource.[BS].[aa_arr_cdp_consent_consent_type] ACT
ON ac.[@id] = act.[@id]
INNER JOIN insightsource.[bs].aa_arr_cdp_consent_consent_type_consent_sub_type ACTST
ON act.[@id] = actst.[@id]
AND act.sequence = actst.mvsequence
Duplicate-checking
The second step of the process requires users to check the SourceSchema table in the InsightImport database. This table contains a list of all the DW.EXPORT CSV files extracted from Temenos Core Banking that should be loaded into InsightImport. Users can modify the SourceSchema table manually in SSMS, through a SQL/T-SQL script, or with the aid of a version control tool like MS Team Foundation Server; this example, however, relies on MS SSMS.
Users can run a query against SourceSchema to ensure that a definition for the Core Banking table they want to import does not already exist and, hence, to avoid duplicates. An example of a query that searches SourceSchema for a definition of a Core Banking table called 'COLLATERAL_RATING' is presented in Figure 38. If the query executes successfully but returns no result, as shown below, users can proceed to the next step.
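A sketch of such a duplicate check is shown below; Figure 38 shows the query actually used, and the schema and column name of SourceSchema are assumptions.
-- Check whether a SourceSchema definition already exists for the table.
SELECT *
FROM   InsightImport.dbo.SourceSchema   -- schema assumed to be dbo
WHERE  TableName = 'COLLATERAL_RATING';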
Otherwise, if a SourceSchema definition exists but the table does not get properly imported into the database layer during Analytics ETL/Process Data ExStore, users will have to troubleshoot and investigate why the upload fails. This process, however, is beyond the scope of this section.
A similar query should also be run against the ExtractList table of the InsightLanding database. As highlighted earlier, this table contains the list of tables to be imported and archived in InsightLanding from InsightImport and, potentially, from other third-party source systems.
Figure 39 shows an example of a SQL script that inserts a new row into the SourceSchema table for a table called COLLATERAL_RATING. Please note that, if users want the table to be uploaded to InsightImport, the value of the Enabled_ column must be set to '1'. Furthermore, they will have to set the value of the Configuration column to 'Local', as this new entry will belong to the Local Configuration layer of the platform.
Figure 39 – Sample of insert statement to update SourceSchema with new table's definition
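A minimal sketch of the kind of statement shown in Figure 39 is given below; the real SourceSchema table has additional columns (for example the parsing-related and online-processing columns discussed next), which are deliberately left unconfigured here, and the column list shown is an assumption.
-- Illustrative INSERT only; verify the actual SourceSchema column list
-- in your release before running.
INSERT INTO InsightImport.dbo.SourceSchema
(TableName, SchemaName, Enabled_, Configuration)
VALUES
('COLLATERAL_RATING', 'Dbo', 1, 'Local');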
An important comment to make is that, in the example above, the SourceSchema columns used to define
parsing rules are not configured. If the imported Core Banking table included local reference fields, multi-
values or sub-values, we would need to include values for parsing-related columns in the INSERT INTO
script. Similarly, if we wanted to configure the table for Online extraction, the INSERT INTO statement
should also be configured to set the value of the OnlineProcess column to 1 for the current table.
Once the INSERT INTO query has successfully run, it is good practice to execute a SELECT query on the
SourceSchema table to ensure that we have entered the details of our Local configuration correctly, as
shown in Figure 40.
Figure 41 illustrates a sample INSERT INTO query to add a definition for the COLLATERAL_RATING table
to ExtractList. Please note that, in order to ensure that the table defined in ExtractList is actually stored in
InsightLanding during Process Data ExStore/ Analytics ETL, users need to set the Import Flag column to
‘1’. In addition to this, we will have to set the value of the Configuration column to ‘Local’ for any new entry added to the Local configuration layer of a financial institution. Another important note is that the statement shown below does not include any configuration for online processing.
Figure 41 – Sample of insert statement to update ExtractList with new table's definition
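An illustrative sketch of the kind of statement shown in Figure 41 follows; the ExtractList column names, in particular the source system column, are assumptions and should be checked against the shipped table definition.
-- Illustrative INSERT only; online-processing columns are not configured.
INSERT INTO InsightLanding.dbo.ExtractList
(SourceName, TableName, ImportFlag, Configuration)
VALUES
('BS', 'COLLATERAL_RATING', 1, 'Local');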
Once the INSERT INTO query has successfully run, it is good practice to execute a SELECT query on the
ExtractList table to ensure that the details of our Local configuration entry have been entered correctly, as
shown below.
If Process Data ExStore is executed, the newly configured table – together with any other table defined in SourceSchema and ExtractList – will first be imported, data-profiled and data-typed in InsightImport and then stored and archived in InsightLanding.
In addition to this, users can also run Analytics ETL to import the new table in InsightImport and
InsightLanding. This Agent job will also create a copy of the table in the InsightSource database. However,
if users also want to import one or more columns of our new table to one of the tables of the
InsightWarehouse database or to the OLAP Cubes, they will have to apply some additional configuration to
the Advanced Analytics Platform. A sample of this kind of configuration will be shown in the following
sections.
Post-Update Checks
The last step of the process is ensuring that the new table has been imported as expected in our Analytics
Platform.
InsightImport
In order to assess whether the upload of the new table into InsightImport was successful, users can begin by checking the content of the Insight.Entities table, as shown in the image below. This table contains a list of tables to be processed and is populated by the s_InsightImportTableList_Create stored procedure, which is executed both during Analytics ETL and Process Data ExStore. Therefore, the newly added table will only appear here if it was successfully stored in InsightImport.
Figure 43 - Sample of MS SQL Query used to check if locally developed field has been imported in InsightImport
Then, users should check the content of the Tables folder in InsightImport to ensure that the new table is amongst the imported ones. Again, it should be noted that, had the definition of the table in SourceSchema requested the parsing of one or more local reference fields, multi-values or sub-values, multiple tables would appear in InsightImport > Tables. E.g. if the Core Banking COLLATERAL.RATING table had one local reference field to be parsed in Analytics, the InsightImport > Tables folder would store two associated tables after the import process, i.e. COLLATERAL_RATING and COLLATERAL_RATING_LocalRef. In the example chosen here, however, no parsing was defined and hence only the COLLATERAL_RATING table will appear.
Once users have ensured that the table or tables are in the InsightImport > Tables folder, they can also expand the Columns folder of the table and check its content. If the columns displayed here match the columns of the corresponding CSV files and they are assigned the proper data types, it means that the new table structure is accurate and that it was properly data-profiled.
The last check to perform in InsightImport is running a query on the imported table to ensure that all
records present in the corresponding CSV file were imported correctly, as shown in Figure 45.
InsightLanding
Users should now check that the new table has also been loaded into the InsightLanding database. To do
so, they can open the InsightLanding>Tables folder and look for our table’s name, e.g.
BS.COLLATERAL_RATING, where BS stands for Banking System (meaning Temenos Core Banking). The
naming convention for all Landing tables is <Source System Name>.<Table Name>.
If multiple tables had been created for COLLATERAL.RATING in InsightImport as a result of parsing, all
these tables would be also copied into InsightLanding. However, this is not the case in this example, as
shown in Figure 46.
Users will then have to query the new table in InsightLanding to ensure that it is correctly populated. The BS.COLLATERAL_RATING table in InsightLanding will contain multiple days of data. For this reason, and due to the Columnstore index, users should never query this table with a SELECT * statement; the SELECT statement should instead explicitly include all the columns users are interested in. The fastest way to query a new InsightLanding table is, therefore, to right-click on it and select the top 1000 records, as shown in the image below. In this scenario, only one day of data has been loaded, so there will be no need to filter on MIS_DATE.
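For reference, a hand-written check along the following lines can be used instead of the right-click option; the non-key column names are hypothetical and should be replaced with the actual columns of the imported table.
-- Query the landed table with an explicit column list (never SELECT *),
-- filtering on MIS_DATE once more than one business date has been loaded.
SELECT TOP (1000)
       MIS_DATE, [@ID], COLLATERAL_RATING_CODE   -- illustrative columns
FROM   InsightLanding.BS.COLLATERAL_RATING
WHERE  MIS_DATE = '2018-12-24';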
InsightSource
The InsightSource database will be updated with our newly imported table only if the Analytics ETL job is
executed in an Advanced Analytics Platform.
This database is used to integrate the latest data coming from potentially multiple source systems. Tables
in InsightSource will be replicas of the latest tables in InsightLanding, consequently the naming convention
for tables in this database should be <Source System Name>.<Table Name> e.g.
BS.COLLATERAL_RATING for our sample table. Figure 48 shows a partial screenshot of the InsightSource
> Tables folder when the COLLATERAL_RATING table has been properly loaded to this database.
Finally, we should run a query on the table imported in InsightSource to check that it has been populated
correctly with all records as shown in Figure 49.
Users can design a split operation on the InsightETL..AttributeCalculations table through the tools of MS SQL Server Management Studio or with the aid of a version control tool like MS Team Foundation Server; the example used in this section, however, relies on MS SSMS.
The following instructions will illustrate how to configure a split operation in the AttributeCalculations table,
stored in the InsightETL database, using the @ID column in the T24 Temenos Core Banking’s LIMIT table
as an example. Furthermore, we will test how the split process is carried out during Analytics ETL based
on this configuration.
Pre-Update checks
Before users start working on our Attribute Calculation, they should examine the structure of the column
to split by executing a query on the LIMIT table in the InsightImport database as shown in Figure 50.
As shown in Figure 50, the LIMIT @ID is a compound attribute which combines the values of CUSTOMER @ID, LIMIT.REFERENCE @ID and a sequence number, separated by a dot ('.'). For this reason, each value will be split into three parts, using the dot as the separator.
Also, users will have to ensure that a Split rule for this column does not yet exist in InsightETL, to avoid duplicates. The query in Figure 51 investigates whether other Attribute Calculations are already set up for the LIMIT table and shows that there are a number of Calculations defined for LIMIT in the InsightImport and InsightSource databases within the ModelBank and Framework configuration layers. However, there is no existing pre-defined @ID column split, so users can proceed and configure the new split operation.
If users want the column to be split during Analytics ETL or Process ExStore, the value of the Enabled_
column must be set to ‘1’. In addition to this, users will have to set the value of the Configuration column
to ‘Local’, as this new column will belong to the Local Configuration layer of the platform, and they have to
specify the target Database, Schema/ Table Name and Column in which the split should take place.
Figure 52 – Sample of insert statement to update AttributeCalculations with the new Split's definition
Once the INSERT INTO query has successfully run, it is good practice to execute a SELECT query on the
AttributeCalculations table to ensure that the details of the Local configuration were entered correctly, as
shown in Figure 53.
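An illustrative sketch of such an AttributeCalculations entry is shown below; apart from ColumnSplitNameSuffix, Enabled_ and Configuration, which are referenced in this section, the column names are assumptions and Figure 52 shows the actual statement.
-- Illustrative split definition for the LIMIT @ID column in InsightSource.
INSERT INTO InsightETL.dbo.AttributeCalculations
(DatabaseName, SchemaName, TableName, ColumnName,
 ColumnSplitSeparator, ColumnSplitNameSuffix, Enabled_, Configuration)
VALUES
('InsightSource', 'BS', 'LIMIT', '@ID', '.', 'POS', 1, 'Local');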
In general, a Split process can be carried out either by the Process Data ExStore Agent job or by the
Analytics ETL job.
More specifically, the Process Data ExStore Agent job contains a step called Insight Attribute Calculations
which will execute splits for columns stored in InsightImport tables. The Analytics ETL Agent job, instead,
includes the Insight Attribute Calculations-Import step that also applies splits to columns in the
InsightImport database’s tables. In addition to this, it also includes the Insight Attribute Calculations-Source
step that applies splits to columns in the InsightSource’s tables while any split in the InsightStaging
database will be looked after by the core InsightStaging Update ETL steps.
It should be noted that the ModelBank and Framework configuration layers of the Data ExStore, Reporting Platform and Advanced Analytics Platform only contain Split operations targeting the InsightImport, InsightSource and InsightStaging databases. However, additional steps can be locally added to both agent jobs to process any split operation applied to InsightLanding.
In this specific example, InsightSource is selected as the target database for the split. For this reason, users will run the Analytics ETL job to process the split.
In case a full Analytics ETL has already been executed for the current business date and users do not want
to re-run it, it is possible to only execute the step in charge of processing the Split. This step consists of a
call to the InsightETL.[dbo].s_CreateColumnCalculations stored procedure using the name of the target
database as a first parameter. If we run this stored procedure as a standalone query, we can also specify
the target Table and Schema to process as a second and third input parameters, respectively.
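Based on that description, the standalone call could look like the sketch below; the parameter order (database, then table, then schema) follows the text above and should be verified against the procedure definition.
-- Run only the attribute-calculation step for the InsightSource database,
-- limited to the BS.LIMIT table.
Exec InsightETL.[dbo].[s_CreateColumnCalculations] 'InsightSource', 'LIMIT', 'BS';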
Post-Update Checks
Once the column split has been executed successfully, users can run a query on the target table to check
the end-result of the process. The query in Figure 54 selects the content of the BS.LIMIT table in
InsightSource because this is where our @ID split has taken place. Please note that the columns storing
the LIMIT @ID split information are called ‘@ID_POS1’, ‘@ID_POS2’ etc. because we assigned the value
‘POS’ to the ColumnSplitNameSuffix parameter in the AttributeCalculations definition for our Split rule.
Figure 54 - Sample of MS SQL Query used to check if the locally developed split was executed correctly
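A sketch of such a check is given below; the suffix-based column names follow from the 'POS' value discussed above, while the selection of base columns is illustrative.
-- Inspect the split result columns generated by the AttributeCalculations rule.
SELECT TOP (100) [@ID], [@ID_POS1], [@ID_POS2], [@ID_POS3]
FROM   InsightSource.BS.[LIMIT];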
Users can design a calculation on the InsightETL..AttributeCalculations table through the tools of MS SQL Server Management Studio or with the aid of a version control tool like MS Team Foundation Server; the example used in this section, however, relies on MS SSMS.
The following example shows how to configure a calculation which works out the date on which a LIMIT record was last updated, using the DATE_TIME column in the T24 Temenos Core Banking LIMIT table as an example. Furthermore, it tests how the calculation process is carried out during Analytics ETL based on this configuration.
Pre-Update checks
Before we start working on our Attribute Calculation, we should examine the structure of the column we
want to use as a basis for our calculation by executing a query on the LIMIT table in the InsightImport
database as shown in Figure 55.
DATE_TIME is an audit column present in most Temenos Core Banking tables. It includes information about the date and time of the last change on a record – specifically, the date part is stored in the first 6 digits of the DATE_TIME value using a YYMMDD format, e.g. if the value of the DATE_TIME column is 1705140709, the date of the last change on the limit is given by the 170514 string, i.e. the 14th of May 2017. The expression to be defined in the calculation will extract the date string from the DATE_TIME column and load it into the LIMIT table in a new column called ETL_CHANGE_DATE, in a YYYYMMDD format (e.g. the 14th of May 2017 will be stored as '20170514').
After checking the DATE_TIME column structure, users have to ensure that a Calculation rule working out the latest update date does not yet exist in InsightETL, to avoid duplicates. The query in Figure 56 investigates whether other Attribute Calculations involving the DATE_TIME column are already set up for the LIMIT table. The query shows that there is no existing pre-defined calculation relying on the DATE_TIME column, so users can proceed with the new configuration.
Please note that, if users want a column to be calculated during Analytics ETL or Process ExStore, the value
of the Enabled_ column must be set to ‘1’. In addition to this, they have to set the value of the Configuration
column to ‘Local’, as this new column will belong to the Local Configuration layer of the platform, and they
have to specify the target Database, Schema Name and Table Name in which the calculation should take
place. Unlike in the split operation, users do not have to provide a target column name and a
ColumnSplitNameSuffix in this calculation definition – users have to include a value for the SQLExpression
and for the CalculationColumnName columns, instead, which will store the T-SQL expression used for the
calculation and the name of the new column where the results of the calculation will be stored, respectively.
Figure 57 – Sample of insert statement to update AttributeCalculations with the new Calculation's definition
Once the INSERT INTO query has successfully run, it is good practice to execute a SELECT query on the
AttributeCalculations table to ensure that details of our Local configuration were entered correctly, as shown
in Figure 58.
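A sketch of such a calculation entry is shown below; SQLExpression and CalculationColumnName are the columns named above, while the remaining column names and the exact expression are assumptions to be checked against Figure 57.
-- Illustrative calculation definition: derive a YYYYMMDD date from the
-- first six digits (YYMMDD) of DATE_TIME.
INSERT INTO InsightETL.dbo.AttributeCalculations
(DatabaseName, SchemaName, TableName,
 SQLExpression, CalculationColumnName, Enabled_, Configuration)
VALUES
('InsightSource', 'BS', 'LIMIT',
 '''20'' + LEFT(DATE_TIME, 6)',
 'ETL_CHANGE_DATE', 1, 'Local');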
Calculation can be carried out either by the Process Data ExStore Agent job or by the Analytics ETL job,
depending on the target database selected for the calculation rule.
More specifically, the Process Data ExStore Agent job contains a step called Insight Attribute Calculations
which will execute calculations for columns stored in InsightImport tables. The Analytics ETL Agent job,
instead, includes the Insight Attribute Calculations-Import step that also applies calculations to columns in
the InsightImport database’s tables. In addition to this, it also includes the Insight Attribute Calculations-
Source step that applies calculations to columns in the InsightSource’s tables while any calculations in the
InsightStaging database will be looked after by the core InsightStaging Update ETL steps.
It should be noted that the ModelBank and Framework configuration layers of the Data ExStore, Reporting Platform and Advanced Analytics Platform only contain calculations targeting the InsightImport, InsightSource and InsightStaging databases. However, additional steps can be locally added to both agent jobs to process any calculation applied to InsightLanding.
In the specific case of our calculation example, however, InsightSource is the target database. Therefore,
users should use the Analytics ETL Agent job as it includes the Insight Attribute Calculations-Source step
that applies Calculations to columns in the InsightSource’s tables.
The alternative, if users do not want to execute a full Analytics ETL, is to run only the step in charge of processing the calculation. This step consists of a call to the InsightETL.[dbo].s_CreateColumnCalculations stored procedure using the name of the target database as the first parameter. If this stored procedure is run as a standalone query, the target Table and Schema to process can also be specified as the second and third input parameters, respectively.
Post-Update Checks
Once the calculation has been executed successfully, users can run a query on the target table to check the end result of the process. Figure 59 shows a query selecting the content of the BS.LIMIT table in InsightSource, because this is where the date calculation has taken place. The column storing the date of the last change to the LIMIT record is called 'ETL_CHANGE_DATE' because this value was assigned to the CalculationColumnName parameter in the calculation definition.
Figure 59 - Sample of MS SQL Query used to check if the locally developed calculation was executed correctly
This section takes for granted that the Budget data used is already suitable to be imported into InsightLanding – the kind of Budget data processing needed may vary depending on the original format of the Budget data and should be discussed with the Analytics Product team.
In this example, all Budget data is stored in a single table called GLBudget. GLBudget, in turn, is structured
into 16 columns, whose primary key is the LINE_ID (see Figure 60). This is the standard Budget format
for which Analytics Platforms provide out-of-the box views, tables and columns within the ModelBank
configuration layer.
Updating ExtractList
ExtractList is a configuration table storing the definition and the extract configuration parameters for all the
tables to be stored and archived into InsightLanding, loaded from either InsightImport or from third party
source systems like Budget.
Before updating ExtractList, it is good practice to ensure that a Budget table definition does not already exist, in order to avoid duplicates. Figure 61 shows a sample of a query to check this.
Once users have ensured that no existing ExtractList definition exists, they can create one similar to the
sample of INSERT INTO statement illustrated in Figure 62.
If the table has to be uploaded to InsightLanding, the value of the ImportFlag column must be set to '1'. Furthermore, the value of the Configuration column should be set to 'Local', as this new entry will belong to the Local Configuration layer.
Once the INSERT INTO script has run successfully, users should query the ExtractList table again to ensure the new entry has been added correctly, as shown below.
Updating ExtractSourceDate
The ExtractSourceDate table stores the queries that are used to retrieve the current extract date from the
source system data. The date returned from the query will be used to create the date portion of the schema
used for each table stored in Insight Landing. One record for each source system defined in the Extract
List table is required and for this reason we need to create a new entry also for the Budget source system.
Users should check the existing content of ExtractSourceDate in MS SSMS before adding any new entries,
to avoid duplicates. To do so, they can use a query similar to the one illustrated in Figure 64 – for each
entry, the name of the source system is identified in the SourceName column and a Budget-related
definition does not exist yet in this example.
Once it has been ensured that no duplicates exist, users can add a Budget-related entry by executing a script similar to the one shown in Figure 65.
When the script to update ExtractSourceDate has executed successfully, users should query this table again to ensure that all columns were populated correctly (see Figure 66).
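A sketch of what such an ExtractSourceDate entry might look like is given below; apart from SourceName, the column names and the date query are hypothetical and the actual values are shown in Figure 65.
-- Illustrative ExtractSourceDate entry for the Budget source system.
INSERT INTO InsightLanding.dbo.ExtractSourceDate
(SourceName, SourceDateQuery, Configuration)
VALUES
('Budget',
 'SELECT MAX(BUDGET_DATE) FROM Budget.dbo.GLBudget',   -- hypothetical date query
 'Local');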
Once the new step has been created, it will have to be positioned just after the InsightLanding CSI BS
Update step, targeting the BS (i.e. Banking System) source system.
Once these steps run successfully, the content of the GLBudget table will be copied to the InsightLanding database, together with a Budget-specific instance of the SourceDate table. The schema for both these tables will be the string BUDGET, as shown in Figure 69.
Moreover, a copy of the latest Budget tables in InsightLanding will be also loaded in InsightSource with
schema ‘Budget’ – both in InsightLanding and in InsightSource, the columns and structure of the GLBudget
table will reflect the columns and structure of the table with the same name within the Budget source
system, as shown in Figure 70.
Please note that, even if we executed a full Analytics ETL, Budget data will be only loaded up to the
InsightSource database, at this stage, and not fully processed to the InsightWarehouse database. The
reason for this is that we have not performed the necessary configuration in InsightStaging yet – this
configuration will be illustrated in the next section of this document.
Updating Systems
The Systems table in InsightStaging controls which source systems are included in the Analytics ETL
process. For this reason, it is essential that we configure an entry for the Budget to ensure that data is
extracted from this source system, then transformed and loaded into the InsightWarehouse database.
Querying the content of Systems as in Figure 71, users will see that an entry for Budget already exists. However, the Enabled_ column is set to 0 for this record, i.e. the processing of Budget within Analytics ETL is disabled.
If using MS SSMS to customize Systems, a simple UPDATE statement can be used to enable the Budget
source system, as highlighted in the following picture.
Figure 72 – Sample of update statement to enable ETL processing for Budget data
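A sketch of such an UPDATE is shown below; the name of the key column in the Systems table is an assumption and the actual statement is shown in Figure 72.
-- Enable ETL processing for the Budget source system.
UPDATE InsightStaging.dbo.Systems
SET    Enabled_ = 1
WHERE  SourceName = 'Budget';   -- key column name assumed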
Once the UPDATE statement has been executed, it is good practice to rerun a query on Systems to ensure
that the Budget entry was edited correctly.
v_Source Views
The Advanced Analytics Platform’s ModelBank includes an out-of-the-box v_source view used to map
Budget source columns against target GL-related tables and columns in the InsightWarehouse database.
This out-of-the-box v_source view for Budget is called v_sourceGLBudget and users can check the script of this view by opening it in ALTER mode in a new query window, as shown in Figure 74.
Important note: in case the out-of-the-box Budget v_source view requires mapping updates or does not include some important columns of the Budget source system, users can create a new locally developed v_source view. This will also impact the InsightStaging..UpdateOrder table (which will have to be updated to enable ETL processing of the new view) and the InsightWarehouse..DataDictionary table (which will be updated to include the new columns in the existing GL and GLConsolidated objects). Please refer to the Adding a New Column to the InsightWarehouse database with SQL Script section for further information about this process.
If data for the current business date has already been imported from Temenos Core Banking and users
just need to run an import for Budget data, they do not need to execute a full Analytics ETL but can start
from the InsightLanding CSI BS Update.
Post-Update Checks
The last step of the process is to ensure that the new table has been imported as expected in the Analytics
Platform.
InsightWarehouse
Budget entries will be stored in the GL and GLConsolidated-related tables and views. In order to ensure
that these entries have correctly been added and populated users can execute the following queries on the
v_GLConsolidated view in the InsightWarehouse database.
First, users can filter those records for which the GLBudgetAmount column has been populated, i.e. the
Budget entries.
Secondly, users can run a query on all the GLConsolidated entries for the same GL Account (e.g. for an
account with GLNum = MBPL.0030) as shown in Figure 76.
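Sketches of these two checks are shown below; the column names are assumed from the v_GLConsolidated discussion earlier in this guide, and the figures referenced above show the actual queries.
-- 1. Budget entries only: records where GLBudgetAmount has been populated.
SELECT GLNum, BusinessDate, GLBudgetAmount
FROM   InsightWarehouse.dbo.v_GLConsolidated
WHERE  GLBudgetAmount IS NOT NULL;

-- 2. All GLConsolidated entries for a single GL Account.
SELECT GLNum, BusinessDate, GLAmount, GLBudgetAmount
FROM   InsightWarehouse.dbo.v_GLConsolidated
WHERE  GLNum = 'MBPL.0030';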
The query above will return a list of records from Temenos Core Banking and only one record from the Budget source system (i.e. the record whose GLBudgetAmount column is not null) – the latter defines the total monthly budget amount for the GL Account considered by the query.
When a new column is added to the Advanced Analytics Platform, it can be either directly imported from a
Source System or it can be calculated within the platform itself. In case the column is calculated, its value
can be either the result of a split/calculation defined in the AttributeCalculations table of InsightETL or of
a business rule also stored in InsightETL but defined through the Data Manager feature.
Columns defined through AttributeCalculations are normally added to a copy of a source system table either
in the InsightImport database (and the whole table is then copied to InsightLanding and to InsightSource
through Analytics ETL) or they are directly added in the InsightSource database. In either case, they will
be displayed as standard columns of a source system table in InsightSource, right before being extracted,
transformed in InsightStaging and loaded into InsightWarehouse like standard Source System columns.
Splits and calculations can be also added to other databases but additional customization would be required
in this case.
Columns defined through Data Manager’s business rules, instead, can be applied by default to tables
residing in the InsightLanding, InsightSource and InsightStaging databases or to abstraction views in
InsightWarehouse. The types of rules that can be designed through Data Manager are banding, calculation,
Custom Table, Dataset and Lookup. These rules will be directly applied to the appropriate database through
dedicated steps in Analytics ETL.
The configuration of columns directly extracted from a Source System is very similar to the configuration of columns defined through InsightETL..AttributeCalculations, and these two kinds of columns are therefore covered together in the same way. A separate section, instead, will be dedicated to Data Manager business rules and to the columns resulting from them.
The configurations illustrated in this section will be relying on the tools of MS SQL Server Management
Studio but new columns can be added to InsightWarehouse also through a source code management
system like MS Team Foundation Server.
Important note: new columns’ names should be less than 15 characters long and
comply with Analytics and SQL naming conventions.
As previously mentioned, the instructions in this section apply both to columns directly imported from a
Source System and to columns obtained through a business rule defined in InsightETL through
AttributesCalculations. To illustrate these two scenarios, the examples used will be the ANNUAL_BONUS
column and the ETL_ANNUAL_BONUS column. The former is a column we can find in the CUSTOMER table
in InsightImport and it contains data from the ANNUAL.BONUS field in CUSTOMER, a Temenos Core
Banking table. The latter is a column whose value is obtained from a calculation defined in InsightETL..AttributeCalculations and that will be added to the CUSTOMER table in InsightSource (the same would apply to business rules executed in InsightImport or InsightLanding, as long as they are reflected in InsightSource), as shown in the previous sections.
Pre-Update checks
Before users start working on importing the new column, they should ensure that it is available in the InsightSource database. In both of the examples considered, the columns to be imported belong to the CUSTOMER table, and Figure 78 shows two queries that can be used to ensure that either ANNUAL_BONUS or ETL_ANNUAL_BONUS has been stored in InsightSource correctly.
The second check users need to perform is on the DataDictionary table in the InsightWarehouse database,
which contains a definition for all columns of data tables or views in InsightWarehouse.
As previously explained, the new column to be added is called AnnualBonus and it could be mapped either
against a column directly imported from the source system (e.g. ANNUAL_BONUS) or against a calculated
column (e.g. ETL_ANNUAL_BONUS). Users need to ensure that a definition for AnnualBonus does not exist
already in DataDictionary to avoid duplicates and they can use the following query to do so.
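A sketch of such a query is shown below; the DataDictionary column name used in the WHERE clause is an assumption.
-- Check whether AnnualBonus is already defined in DataDictionary.
SELECT *
FROM   InsightWarehouse.dbo.DataDictionary
WHERE  ColumnName = 'AnnualBonus';   -- column name assumed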
Configuring InsightStaging
Users should also configure the InsightStaging database so that the newly added column is populated
correctly when the Analytics ETL flow is executed. Firstly, users will have to create a new v_source view to
map the new column. Afterwards, they will need to create new entries within the UpdateOrder table so
that this new v_source view is taken in consideration within the dataflow of the Analytics ETL agent job.
Please note that these two steps are only required if no locally configured v_source view exists for a specific object (e.g. Customer). Otherwise, new columns for the object can simply be added to the existing local v_source view and no new UpdateOrder entry will be needed.
then enable the latter and disable the former. This section will illustrate the third option, as it allows users both to develop locally configured v_source views and to keep a backup of the original v_source views provided out-of-the-box by Temenos.
The locally configured v_source view will be called v_sourceCustomerBS_BNK and it can be designed using
the script of the v_sourceCustomerBS view as a basis.
To copy the script of v_sourceCustomerBS into the new view, users can open MS SQL Server Management
Studio, then right-click on the v_sourceCustomerBS and select the option Script View As > CREATE To >
New Query Editor Window, as shown in Figure 80.
Once the query has been opened in a new window, users should modify the view name to
v_sourceCustomerBS_BNK, as shown in Figure 81.
Configuration for Columns resulting from InsightETL rules (designed through Data Manager or AttributeCalculations)
Users should then append a new local configuration section, like the following, that includes the source-to-
target mapping for the new column. In the example shown in Figure 82, the ETL_ANNUAL_BONUS column
designed through AttributeCalculations is used but the mapping for a source system column like
ANNUAL_BONUS would be the same as shown in Figure 83.
The same ‘Local Changes’ (or ‘Local Configuration’) section can also be used, in a later stage, for other
locally developed Customer columns, if needed.
Figure 82 – Adding locally developed columns to locally developed v_source view in InsightStaging
Figure 83 – Adding source system columns to locally developed v_source view in InsightStaging
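A highly simplified sketch of what the resulting local view could look like is given below; the real v_sourceCustomerBS_BNK is a full copy of v_sourceCustomerBS with the extra mapping appended, and the table and column names here are illustrative only.
-- Simplified illustration of a locally configured v_source view with a
-- 'Local Changes' section appended for the new column.
CREATE VIEW dbo.v_sourceCustomerBS_BNK AS
SELECT  c.[@ID]            AS CustomerNum,      -- existing mappings (abridged)
        -- Local Changes: locally developed column
        c.ETL_ANNUAL_BONUS AS AnnualBonus       -- or c.ANNUAL_BONUS for a source system column
FROM    InsightSource.BS.CUSTOMER c;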
Once users have applied the changes highlighted above, they can execute the script – this will both check
for errors and generate the new v_sourceCustomerBS_BNK view under InsightStaging>Views.
To enable the locally developed v_source view, users can execute an INSERT INTO statement similar to
the one shown in Figure 85.
Figure 85 – Sample of insert statement to enable the ETL processing of the locally developed v_source view in
InsightStaging..UpdateOrder
Figure 86 – Sample of update statement to disable the ETL processing of the out-of-the-box v_source view in
InsightStaging..UpdateOrder
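Sketches of the statements behind Figures 85 and 86 are shown below; the UpdateOrder column names are assumptions and should be checked against the shipped table.
-- Enable the locally developed view in the ETL update order...
INSERT INTO InsightStaging.dbo.UpdateOrder (ViewName, Enabled_, Configuration)
VALUES ('v_sourceCustomerBS_BNK', 1, 'Local');

-- ...and disable the out-of-the-box view it replaces.
UPDATE InsightStaging.dbo.UpdateOrder
SET    Enabled_ = 0
WHERE  ViewName = 'v_sourceCustomerBS';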
Configuring InsightWarehouse
Lastly, users have to update the DataDictionary table in InsightWarehouse. DataDictionary stores the
definitions for all columns in the Dim, Fact and Bridge tables and for all the abstraction and data source
views (with schema Cubes) within the InsightWarehouse database. Therefore, if users want to add a new
column to the Warehouse, they need to include its definition within DataDictionary.
As previously discussed, AnnualBonus belongs to the Customer object, i.e. it should be included either in
the DimCustomer or in the FactCustomer table of the InsightWarehouse database in our Advanced Analytics
Platform. In general, both columns directly extracted from a source system and columns resulting from
InsightETL rules can be added either as Dim or as Fact columns. Whether a column should be set up as a
Fact or as a Dimension is debatable and depends on how often the value of such column is likely to change
and on the preferences of the bank requesting this local configuration column to be set up. In this
document, the AnnualBonus will be set up as a Fact column to provide an example of how Fact definitions
should be defined in DataDictionary. An example of Dim definition will be also provided in this document.
As shown in Figure 88, the INSERT INTO statement will set up three new entries. These are the definition of AnnualBonus as a new column in the FactCustomer table, the definition of AnnualBonus as a new column in the v_Customer view (which is part of the abstraction layer of InsightWarehouse) and the definition of AnnualBonus as a new column in the v_CubeFactCustomer view (also part of the abstraction layer of InsightWarehouse, used for the update of the SSAS Cube Measures). This means that the new AnnualBonus column will not only be available in the Warehouse but can also be exposed to Reports and SSAS Cubes.
Whenever a column definition is added to DataDictionary, the DataDictionary update routines must be executed in order to update the corresponding Dim, Fact or Bridge tables and the abstraction or data source views in InsightWarehouse.
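These routines are the stored procedures named in the checks that follow (s_DDCombinedRecords_update, s_TableStructureFromDD_update and s_ViewStructureFromDD_update); a sketch of calling them is shown below, with any parameters omitted since they may vary by release.
-- Refresh the combined DataDictionary records, then rebuild the table and
-- view structures from DataDictionary (parameters, if any, omitted).
Exec InsightWarehouse.dbo.s_DDCombinedRecords_update;
Exec InsightWarehouse.dbo.s_TableStructureFromDD_update;
Exec InsightWarehouse.dbo.s_ViewStructureFromDD_update;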
InsightWarehouse Checks
Users should check that the new entries were correctly included in the DataDictionary table by running a
query similar to the one shown in Figure 89.
In addition to the three manually created entries in DataDictionary, which had the Configuration value set to 'Local', the query output will show an extra definition for the AnnualBonus column, with Configuration set to 'Combined Configuration'. This was generated by the s_DDCombinedRecords_update stored procedure.
To ensure that the structure of the FactCustomer table has been amended correctly by the
s_TableStructureFromDD_update stored procedure, users can run a query on the FactCustomer table as
shown in Figure 90.
Figure 90 – Sample of query that checks if locally developed fields were added to InsightWarehouse’s table
The query output shows that the AnnualBonus column has been correctly included in the FactCustomer
table but it has not been populated yet – the reason behind this is that Analytics ETL has not run after the
DataDictionary update.
The same applies to the v_Customer and the v_CubeFactCustomer views – if the
s_ViewStructureFromDD_update stored procedure has been executed correctly, the AnnualBonus column
will be added to these two views but not populated yet, as shown in Figure 91.
Figure 91 – Sample of queries that checks if locally developed fields were added to InsightWarehouse’s views
Once Analytics ETL has been executed correctly, we can check whether the AnnualBonus column has been populated correctly in the FactCustomer table and in the v_Customer and v_CubeFactCustomer views.
Post-Update Checks
Figure 92 shows samples of queries that can be used to ensure that values for the AnnualBonus column
were populated correctly across the target table and views in the InsightWarehouse database. The WHERE
clause for all the SELECT statements below is filtering results based on the current business date as the
AnnualBonus column has not been imported for any past date.
Figure 92 – Sample of queries that checks if locally developed fields were populated correctly in InsightWarehouse’s
tables and views
Data Manager is a feature of the Analytics web front end that updates the Rules Engine's tables stored in the InsightETL database. Data Manager is used to define different types of business rules that, for example, look up InsightWarehouse columns (e.g. a distinct list of Product Codes is mapped to a table containing Classification and Category), create banding columns (e.g. the AgeGroup column based on the value of the Age column), or assign specific values to a column through SQL functions or coding, etc. For the Lookup and
Banding rules set up in Data Manager, we also need to specify the values to be mapped and the available
bands to be used, respectively (for more information about the available rules types in Data Manager,
please refer to the Analytics Application Web Front End User Guide).
Furthermore, Data Manager allows the rules designer to set up business rules that will be applied, during
Analytics ETL, to tables in the InsightLanding, InsightSource and InsightStaging databases. Through this
facility, we can also update the abstraction views in the InsightWarehouse database, even though it is not
possible to apply business rules directly to Bridge, Dim and Fact tables in this database. If users want to
apply Data Manager’s business rules to InsightWarehouse tables, then, they should apply these rules to
InsightStaging and then map the new fields resulting from them to InsightWarehouse columns and, in case
of new columns, users will have to configure them in the Warehouse.
During Analytics ETL, the business rules defined through Data Manager will be processed and applied to
the appropriate database tables (i.e. the temporary tables in the InsightStaging database, in the example
considered in this section). If users want these values to be loaded into the InsightWarehouse database,
they will have to create a corresponding definition in the DataDictionary table.
This section will use the AnnualBonusGroup column to demonstrate how to carry out the aforementioned configuration process. The value of the new AnnualBonusGroup column will be determined by a banding rule based on the value of the previously added AnnualBonus column. This section will also demonstrate the set-up of a Dim column.
Pre-Update checks
Before configuring the new AnnualBonusGroup column, users need to ensure that the column on which the
grouping rule will depend, i.e. AnnualBonus, is correctly imported into the Advanced Analytics Platform.
If the source column that serves as the basis for our business rule is part of the ModelBank, Framework
or Country ModelBank configuration, it will be available by default in the Advanced Analytics Platform. If,
as in this case, the source column is part of the Local Configuration Layer, we will have to manually
configure its import into the Advanced Analytics Platform. This process is described in detail in the Adding
a New Column from a Source System or from InsightETL to the Data Warehouse with SQL Script (Fact
Column) section.
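A quick way of confirming that the AnnualBonus column is already available in the Warehouse before building the new rule is a query against the standard INFORMATION_SCHEMA views; the sketch below assumes the column was added to FactCustomer and its related views as described in the section referenced above.

-- Sketch: confirm the AnnualBonus column is already present before creating the banding rule.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM InsightWarehouse.INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'AnnualBonus';
-- Expected output (assumption): rows for FactCustomer, v_Customer and v_CubeFactCustomer.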
The second check to be performed is on the DataDictionary table in the InsightWarehouse database, which
contains a definition for every column of the data tables and views in InsightWarehouse. If no
AnnualBonusGroup column definition exists, as shown in Figure 93, users will have to create it.
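A query along the following lines can be used for this check; the DataDictionary column name used here (ColumnName) is an assumption and should be verified against the actual table structure.

-- Sketch: check whether a definition for AnnualBonusGroup already exists.
SELECT *
FROM InsightWarehouse..DataDictionary
WHERE ColumnName = 'AnnualBonusGroup';
-- No rows returned means the definitions still have to be created.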
The last check required is to ensure that the new rule is not already available in Data Manager to avoid
duplicates. First, we should log onto the Analytics web front-end application, open the System Menu and
select the Data Manager option.
Once we have opened the Data Manager screen, we can use the search box on the top left-hand side of
the screen to check whether a rule with a name similar to the one we are about to create already exists,
as shown in Figure 94. In this sample, the query returns no data – this means that no Annual Bonus-related
rule exists.
Remaining on the Data Manager screen, users should clear the criteria entered in the search box and then
click on the InsightStaging database in the Data Manager menu. This ensures that the list of existing Data
Manager rules for this database is displayed in the centre of the screen and that the new rule about to be
created is added to InsightStaging. Then, users should click the “New” button on the left-hand side of the
screen, as shown in Figure 95.
This will bring up the Add new rule screen, which is divided into four tabs; users should fill in the fields in
the Rule General section as shown in Figure 96.
On the Source Data tab, users should type in the name of the StagingCustomer table as the SourceTable
and then select the AnnualBonus column from the StagingCustomer table.
On the Custom Data tab, users should add AnnualBonusGroup as a custom field.
On the Execution tab, users can leave the default values set for the Execution Phase field, i.e. Extract, and
for the Execution Step, i.e. 1. These two parameters control when the current business rule will be executed
within the core part of Analytics ETL, so if we are setting up a business rule that depends on another rule,
the parameters on the Execution tab should be updated accordingly. For example, if the current business
rule depended on a parent rule with Execution Phase = Extract and Execution Step = 1, the child rule
should have Execution Phase = Extract (or a later phase) and Execution Step = 2 (or higher). Once the
Execution tab has been filled in, users should save the record.
As soon as we hit Save, the Data Mappings tab will appear. Users can click on this tab and then on the Plus
button to define the banding groups for Annual Bonus Group and the mapping between these groups and
the values of Annual Bonus. These bands and mappings should be defined as shown below.
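Purely as an illustration (the actual band boundaries are a local design decision and are not prescribed by this guide), the mapping could, for example, place AnnualBonus values from 0 to 9,999 in a ‘Low’ band, values from 10,000 to 49,999 in a ‘Medium’ band and values of 50,000 and above in a ‘High’ band.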
Configuring InsightWarehouse
We now have to add new definitions for the AnnualBonusGroup column to the DataDictionary table in
InsightWarehouse. As noted above, DataDictionary stores the definitions for all columns in the Dim, Fact
and Bridge tables and for all the abstraction and Cubes views within the InsightWarehouse database.
Therefore, users should add a new entry in DataDictionary for each table or view in which the
AnnualBonusGroup column should appear.
In the previous section, the AnnualBonus column was classified as a Fact column. The new
AnnualBonusGroup column, by contrast, will be classified as a Dimension to provide an example of how
Dim definitions should be configured in DataDictionary.
Figure 101 – Sample of insert statements to add new Dim definitions in InsightWarehouse..DataDictionary
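The sketch below only illustrates the general shape such INSERT statements might take. The DataDictionary column list used here (TableName, ColumnName, DataType, SCDType, Configuration), the data type and the SCDType value are assumptions for illustration and must be replaced with the actual structure and values required by your installation.

-- Illustrative sketch only: verify the real column list of DataDictionary before
-- running anything similar. All column names and values below are assumptions.
INSERT INTO InsightWarehouse..DataDictionary
    (TableName, ColumnName, DataType, SCDType, Configuration)
VALUES
    ('DimCustomer',       'AnnualBonusGroup', 'varchar(50)', 2,    'Local'),  -- Dim table entry: SCDType must be supplied (hypothetical value)
    ('v_Customer',        'AnnualBonusGroup', 'varchar(50)', NULL, 'Local'),  -- abstraction view entry
    ('v_CubeDimCustomer', 'AnnualBonusGroup', 'varchar(50)', NULL, 'Local');  -- cube view entry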
The sample above includes three new entries: first, the definition of AnnualBonusGroup as a new column
in the DimCustomer table; then, the definition of AnnualBonusGroup as a new column in the v_Customer
view (which is part of the abstraction layer of InsightWarehouse); and, finally, the definition of
AnnualBonusGroup as a new column in the v_CubeDimCustomer abstraction view, which is used to update
the SSAS Cubes. This means that the new AnnualBonusGroup column will not only be available in the
Warehouse but also exposed to Reports and SSAS Cubes.
If the three entries above are compared against the previously inputted definitions for the AnnualBonus
Fact column, one important difference appears in the structure of the SQL statements used: when users
define a new column for a Dim table, they will also have to specify a value for SCDType in the DimCustomer-
specific record. This parameter defines how dimension changes should be handled by InsightWarehouse
and should always be set to NULL in Fact column definitions – please refer to the relevant section of this
guide for further details.
If the INSERT statements run without any errors, users can move to the next step.
The routines above must be executed in order to update any Dim, Fact or Bridge table and any abstraction
or data source views in InsightWarehouse whenever a column definition is added to DataDictionary.
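Assuming the routines in question are the three stored procedures referenced in the checks that follow (s_DDCombinedRecords_update, s_TableStructureFromDD_update and s_ViewStructureFromDD_update), a minimal sketch of running them is shown below; the database they reside in and any parameters they expect are assumptions and should be confirmed against your installation.

-- Sketch: publish the new DataDictionary definitions into the Warehouse structures.
-- Assumption: the procedures live in InsightWarehouse and are called here without parameters.
USE InsightWarehouse;
EXEC dbo.s_DDCombinedRecords_update;      -- generates the 'Combined Configuration' entries
EXEC dbo.s_TableStructureFromDD_update;   -- amends the Dim/Fact/Bridge table structures
EXEC dbo.s_ViewStructureFromDD_update;    -- amends the abstraction and cube views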
InsightWarehouse Checks
Users should check that the new entries were correctly included in the DataDictionary table by running a
query similar to the one shown in Figure 102.
Figure 102 – Sample of query that checks locally developed definitions in InsightWarehouse..DataDictionary
It should be noted that, in addition to the three new entries we defined in DataDictionary, which have the
Configuration value set to ‘Local’, there is also an extra definition for the AnnualBonusGroup column with
Configuration set to ‘Combined Configuration’. This entry was generated by the s_DDCombinedRecords_update
stored procedure.
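A sketch of such a verification query follows; the column names used (ColumnName, Configuration) are the same assumptions as before and should be checked against the real table structure.

-- Sketch: after publishing, four definitions are expected for AnnualBonusGroup,
-- three with Configuration = 'Local' and one with Configuration = 'Combined Configuration'.
SELECT *
FROM InsightWarehouse..DataDictionary
WHERE ColumnName = 'AnnualBonusGroup'
ORDER BY Configuration;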
To ensure that the structure of the DimCustomer table has been amended correctly by the
s_TableStructureFromDD_update stored procedure, users can run a query on the DimCustomer table as
shown in Figure 103.
Figure 103 – Sample of query that checks if locally developed fields were added to InsightWarehouse’s table
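A minimal sketch of this structural check is shown below; the dbo schema is an assumption.

-- Sketch: confirm the new column exists in DimCustomer but is still empty.
SELECT TOP 10 AnnualBonusGroup
FROM InsightWarehouse.dbo.DimCustomer;
-- At this stage the column is expected to contain only NULL values.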
The query output shows that the AnnualBonusGroup column has been correctly included in the
DimCustomer table but has not been populated yet, as Analytics ETL has not been run.
The same applies to the v_Customer and the v_CubeDimCustomer views – if the
s_ViewStructureFromDD_update stored procedure has been executed correctly at the end of the publishing
process, the AnnualBonusGroup column will be added to these two views but not yet populated, as shown
in Figure 104.
Figure 104 – Sample of queries that check if locally developed fields were added to InsightWarehouse’s views
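As with the table check, a brief sketch of the view-level structural check is given below; INFORMATION_SCHEMA is used here purely as one possible way of confirming that the column is present.

-- Sketch: confirm the AnnualBonusGroup column has been added to both views.
SELECT TABLE_NAME, COLUMN_NAME
FROM InsightWarehouse.INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'AnnualBonusGroup'
  AND TABLE_NAME IN ('v_Customer', 'v_CubeDimCustomer');
-- Two rows are expected if the views were rebuilt correctly.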
Once Analytics ETL has been executed correctly, we can check that the AnnualBonusGroup column has been
populated correctly in the DimCustomer table and in the v_Customer and v_CubeDimCustomer views.
Post-Update Checks
Figure 105 shows samples of queries that can be used to ensure that values for the AnnualBonusGroup
column were populated correctly across the target table and views in the InsightWarehouse database. The
WHERE clause for all the SELECT statements below filters results on the current business date, as the
AnnualBonusGroup column has not been imported for any past date.
Figure 105 – Sample of queries that check if locally developed fields were populated correctly in InsightWarehouse’s
tables and views
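These checks mirror the earlier post-update queries for AnnualBonus; a compact sketch follows, again assuming a BusinessDate column, the dbo schema and a sample business date.

-- Sketch: verify that the banding rule populated AnnualBonusGroup for the current business date.
DECLARE @BusinessDate DATE = '2024-01-31';   -- hypothetical current business date

SELECT TOP 10 AnnualBonusGroup
FROM InsightWarehouse.dbo.DimCustomer
WHERE BusinessDate = @BusinessDate AND AnnualBonusGroup IS NOT NULL;

SELECT TOP 10 AnnualBonusGroup
FROM InsightWarehouse.dbo.v_Customer
WHERE BusinessDate = @BusinessDate AND AnnualBonusGroup IS NOT NULL;

SELECT TOP 10 AnnualBonusGroup
FROM InsightWarehouse.dbo.v_CubeDimCustomer
WHERE BusinessDate = @BusinessDate AND AnnualBonusGroup IS NOT NULL;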