IBM ITX

The document outlines key interview questions and topics related to IBM ITX and WTX roles, focusing on data transformation concepts, map design, performance optimization, and integration with other IBM tools. It also covers specific functionalities of WTX, such as error handling, security features, and deployment strategies in various environments. Additionally, it discusses advanced topics like parallel processing, version control, and the use of WTX in Service Oriented Architectures.

When interviewing for an IBM ITX (Transformation Extender) or ITXA (Transformation Extender Advanced) role, expect questions focused on your understanding of data transformation concepts, ITXA platform features, map design, performance optimization, integration with other IBM tools, and real-world application scenarios within complex data integration projects. Some potential questions include:
General ITXA Concepts:
Explain what ITXA is and its primary function within the IBM ecosystem.
What are the key differences between ITX and ITXA?
Describe the different types of transformations supported by ITXA (e.g., data type
conversion, data filtering, data aggregation).
Map Design and Implementation:
How do you design a complex data transformation map in ITXA, including
considerations for input/output data formats and data validation?
Explain the concept of "Type Designer" in ITXA and its role in creating data
structures for transformations.
How would you handle nested data structures within a transformation map?
Performance Optimization:
What strategies can you use to optimize the performance of an ITXA
transformation map?
Discuss the role of parallel processing in ITXA and when to utilize it for
performance enhancement.
Integration with Other IBM Tools:
How can you integrate ITXA with IBM Integration Bus (IIB) to build a
comprehensive data integration solution?
Explain how you would use ITXA to extract data from a database and load it into a
data warehouse using IBM DataStage.
Real-World Scenarios:
Describe a scenario where you would use ITXA for data cleansing and
standardization in a large-scale data integration project.
How would you approach error handling and exception management in an ITXA
transformation process?
Explain how you would implement version control and change management for
ITXA maps in a team environment.
Technical Questions:
What are the different types of data sources that ITXA can connect to?
Explain the concept of "enveloping" and "de-enveloping" data within ITXA and its
use cases.
How do you configure ITXA to monitor transformation execution and identify
potential issues?

1. What is IBM WTX and what are its primary functions?


IBM WTX, also known as IBM WebSphere Transformation Extender, is a data transformation
tool designed to automate the processing and integration of complex data formats across
different systems. Its primary functions include data validation, transformation, and routing
to facilitate seamless data integration and communication between disparate systems.
2. How does WTX handle different data formats?
WTX supports a wide range of data formats including XML, EDI, flat files, and relational
databases. It uses type trees and maps to define and convert between these different
formats, ensuring accurate and efficient data processing.
3. Explain the concept of type trees in WTX.
Type trees in WTX define the structure of input and output data formats. They act as
templates that describe how data is organized and are essential for the transformation
process, providing a graphical representation of data types and hierarchical structures.

4. What is a map in WTX and what is its role?


A map in WTX is a specification that defines how data from one or more sources is
transformed and sent to one or more targets. It uses the structures defined by type trees to
map data elements from the source to the destination, applying transformation logic as
necessary.
5. Can you describe how WTX can be integrated with other IBM products?
WTX integrates with various IBM products such as IBM Integration Bus and IBM DataPower
Gateway. This integration allows for powerful and flexible data transformation capabilities
within broader business process management and data handling scenarios.
6. What are the components of a WTX deployment architecture?
A typical WTX deployment architecture includes the Design Studio for developing
transformations, the Launcher for executing maps, the Command Server for remote map
execution, and the Integration Flow Designer for integrating with other applications.
7. Describe the role of the WTX Launcher.
The WTX Launcher is used to schedule and run maps on a variety of platforms. It can
execute transformations as standalone processes or be invoked by external applications,
providing flexibility in how and when data transformations occur.
8. How does WTX manage error handling?
WTX provides robust error handling capabilities, allowing developers to define custom error
processing logic in maps. It can catch and log errors, halt processing, or route data to error
handling routines based on the severity and type of error encountered.
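The route-to-error-handling pattern described above can be illustrated with a conceptual sketch. This is not WTX's actual API (WTX configures this behavior inside maps); it is a Python analogue with hypothetical field names, showing how faulty records are diverted to an error stream instead of halting the whole run:

```python
# Conceptual sketch only: split a batch into a clean stream and an
# error stream, as a WTX map might route invalid data to an error
# handling routine. Field names ("id", "amount") are illustrative.

def validate(record):
    """Return an error message for a bad record, or None if it is valid."""
    if "id" not in record:
        return "missing id"
    if not isinstance(record.get("amount"), (int, float)):
        return "amount is not numeric"
    return None

def process(records):
    """Route each record to the clean output or the error stream."""
    clean, errors = [], []
    for record in records:
        problem = validate(record)
        if problem is None:
            clean.append(record)
        else:
            errors.append({"record": record, "error": problem})
    return clean, errors

clean, errors = process([
    {"id": 1, "amount": 10.0},
    {"amount": "oops"},
])
```

In a real deployment the error stream would typically feed a queue or audit log rather than an in-memory list.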
9. What are some security features available in WTX?
WTX supports several security features such as data encryption, secure FTP, and integration
with secure protocols like HTTPS and IBM MQ over TLS. These features help protect data
integrity and confidentiality during transformation processes.
10. How can performance be optimized in WTX transformations?
Performance in WTX can be optimized by efficient map design, minimizing the use of
external calls and complex functions, and leveraging parallel processing where possible.
Optimizing the underlying data structures and utilizing efficient lookup methods also
contribute to better performance.
11. Explain the concept of adapters in WTX.
Adapters in WTX are components that facilitate the interaction between the transformation
engine and external systems or formats. They provide the necessary interface for reading
from and writing to different databases, applications, or data formats.
12. What is the purpose of the Audit Database in WTX?
The Audit Database in WTX is used to log details about data transformations, including
execution times, errors, and operational metrics. This information is valuable for monitoring,
debugging, and optimizing transformation processes.
13. How does WTX support XML processing?
WTX provides extensive support for XML processing, including the ability to parse, generate,
and transform XML data. It uses XML schema definitions to validate and map XML structures
within transformation processes.
14. Describe a typical use case for WTX in a business environment.
A typical use case for WTX in a business environment involves the integration of different
systems, such as merging customer data from multiple sources into a single CRM system.
WTX can transform disparate data formats into a standardized format that the CRM system
can use, automating and streamlining the process.
15. How can version control be managed in WTX projects?
Version control in WTX projects can be managed through integration with source control
systems like Git or SVN. This allows for tracking changes, managing versions, and
collaborative development of transformation projects.
Advanced-Level Questions
1. How does WTX utilize parallel processing to enhance performance, and what
are the implementation considerations?
WTX supports parallel processing by allowing the division of data streams into multiple
threads, which can be processed simultaneously on multi-core processors. This is particularly
useful for processing large volumes of data efficiently. Implementation considerations
include ensuring thread safety, managing synchronization issues, and deciding on the
optimal number of threads that balances performance while avoiding resource contention.
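The chunk-and-parallelize idea above can be sketched in Python using the standard-library thread pool. WTX's own engine is configured differently; the chunk size and worker count here are purely illustrative:

```python
# Conceptual sketch: divide a record stream into chunks and transform
# the chunks on a thread pool, preserving the original record order.
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    return record * 2          # stand-in for a map's per-record logic

def run_parallel(records, workers=4, chunk_size=2):
    chunks = [records[i:i + chunk_size]
              for i in range(0, len(records), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so results reassemble cleanly.
        results = list(pool.map(lambda c: [transform(r) for r in c], chunks))
    return [r for chunk in results for r in chunk]

out = run_parallel([1, 2, 3, 4, 5])
```

Note the trade-off the answer mentions: more workers only help until threads start contending for CPU or I/O resources.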
2. Explain the detailed process of creating custom functions in WTX and their
practical applications.
Custom functions in WTX are created using the WTX API, which allows developers to extend
the transformation capabilities of WTX by writing functions in Java or C++. These functions
can then be called within transformation maps to perform operations that are not supported
natively by WTX. Practical applications include complex mathematical calculations,
interaction with external systems, and data manipulations that require custom logic.
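WTX custom functions are written in Java or C++ against the WTX API, as stated above. Purely as a language-neutral illustration of the register-then-invoke pattern (all names here are hypothetical, not part of any WTX API), a Python analogue:

```python
# Conceptual sketch of a custom-function registry: functions are
# registered under a name and later invoked by that name, the way a
# map rule calls a custom function. Names are illustrative only.

CUSTOM_FUNCTIONS = {}

def register(name):
    """Decorator that records a function under a map-callable name."""
    def wrap(fn):
        CUSTOM_FUNCTIONS[name] = fn
        return fn
    return wrap

@register("CHECKSUM")
def checksum(text):
    # Example of logic not supported natively: a simple mod-97 checksum.
    return sum(ord(c) for c in text) % 97

def call_custom(name, *args):
    """Invoke a registered custom function, as a map rule would."""
    return CUSTOM_FUNCTIONS[name](*args)

value = call_custom("CHECKSUM", "AB")
```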
3. Discuss the comprehensive integration capabilities of WTX with IBM Integration
Bus (IIB) and the advantages of such integration.
WTX integrates seamlessly with IBM Integration Bus, allowing for robust and scalable data
transformation solutions within IIB flows. This integration enables direct invocation of WTX
maps from within an IIB message flow, facilitating the transformation of message data as it
passes through the bus. The advantages include enhanced performance, simplified
architecture, and the ability to leverage the rich routing and processing capabilities of IIB
alongside the transformation power of WTX.
4. Describe the use of WTX in a real-time data replication scenario.
In real-time data replication scenarios, WTX can be configured to listen for data changes in
source systems and apply those changes immediately to target systems. This involves
setting up event-driven transformations that trigger on data updates, deletes, or inserts.
WTX processes these events to transform and replicate data accurately and with minimal
latency, ensuring that target systems are synchronized with source updates in near real-
time.
5. How can WTX be configured for high availability and disaster recovery?
High availability and disaster recovery in WTX are achieved through clustering and
replication strategies. WTX transformations can be deployed on clusters of servers that
distribute the load and provide failover capabilities. For disaster recovery, WTX supports
data replication across geographically distributed sites and can quickly switch to a backup
site in case of a primary site failure.
6. Explain the concept of "Type Designer" in WTX and its role in complex
transformations.
The Type Designer in WTX is a tool used to create and manage type trees and maps. It
allows developers to define the structure of data as it will appear before and after
transformation. This is crucial for complex transformations as it provides a visual interface to
manage intricate data relationships and ensures that all data elements are accounted for
and correctly mapped.
7. What are the best practices for optimizing map performance in WTX?
Best practices for optimizing map performance in WTX include minimizing the use of
external calls, using lookups efficiently, reusing maps and components where possible, and
tuning the execution environment. Additionally, developers should regularly profile and
monitor map performance to identify and address any bottlenecks.
8. Discuss the challenges and solutions for handling nested data structures in
WTX transformations.
Handling nested data structures in WTX can be challenging due to the complexity of
mapping deeply nested elements. Solutions include using recursive maps for repeated
nested structures, employing functions to handle nested iterations, and carefully designing
type trees to accurately represent the data. Proper testing is critical to ensure that all levels
of nesting are correctly processed.
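The recursive approach described above can be sketched outside WTX. As an assumption-laden analogue (WTX would express this with recursive functional maps over type-tree groups, not Python dicts), a function that walks arbitrarily deep nesting by calling itself at each level:

```python
# Conceptual sketch: recursively flatten a nested structure into
# dotted target paths, mirroring how a recursive functional map
# handles repeated nested groups. Keys are illustrative.

def flatten(node, prefix=""):
    """Flatten nested dicts into a single level of dotted paths."""
    flat = {}
    for key, value in node.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))   # recurse one level deeper
        else:
            flat[path] = value
    return flat

flat = flatten({"order": {"id": 7, "customer": {"name": "Ada"}}})
```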
9. How does WTX support error handling and exception management in complex
integration environments?
WTX supports comprehensive error handling and exception management through its try-
catch-finally blocks and error handling functions. In complex integration environments, these
features allow WTX to gracefully handle errors by logging, redirecting faulty data to error
queues, or executing recovery routines, thereby maintaining the integrity of the data flow
and ensuring system resilience.
10. Can you describe the process of deploying WTX maps in a distributed
computing environment?
Deploying WTX maps in a distributed computing environment involves compiling the maps
into executable formats and distributing them across multiple servers. This distribution can
be managed through the Command Server or external automation tools. Considerations
include ensuring that all environmental dependencies are met and that the maps are
optimized for the specific characteristics of the distributed environment.
11. What are the implications of using WTX in a cloud-native environment, and
how can it be optimized for such deployments?
Using WTX in a cloud-native environment offers scalability, flexibility, and cost-efficiency.
However, it requires optimization for cloud infrastructure, such as configuring stateless
transformations, utilizing cloud storage, and integrating with other cloud services. Security
in cloud deployments also needs special attention, including the use of encrypted
connections and secure access methods.
12. Explain the detailed process and benefits of using WTX with a Service
Oriented Architecture (SOA).
In a Service Oriented Architecture, WTX acts as a middleware component that facilitates
data transformations required by different SOA services. The integration involves exposing
WTX maps as services, which can be invoked by other SOA components. Benefits include
reusability of transformation logic, consistency in data handling across services, and the
ability to handle diverse data formats and standards used by various services within the SOA
landscape.
13. Discuss how WTX can handle version control and change management in a
team environment.
WTX supports version control and change management by integrating with popular source
control systems like Git or Subversion. Teams can manage changes to maps and type trees,
track revisions, and handle branching and merging. Effective change management in WTX
also involves setting up protocols for testing and approval before changes are deployed to
production environments, ensuring that all team members are synchronized and that
deployments are stable.
14. What strategies can be employed to secure data during transformation
processes in WTX?
Securing data in WTX involves several strategies such as encrypting data both in transit and
at rest, using secure protocols for data transmission (like SFTP and HTTPS), and
implementing authentication and authorization controls for accessing transformation
resources. Additionally, sensitive data can be masked or tokenized within transformation
processes to further protect data privacy.
15. How does WTX facilitate the transformation of industry-specific standards like
EDI, HL7, or SWIFT?
WTX offers specialized packs and adapters for transforming industry-specific standards such
as EDI, HL7, and SWIFT. These packs come with pre-built type trees and maps that align with
the standards’ specifications, greatly simplifying the transformation process. WTX enables
customization of these transformations to cater to specific business rules or integration
requirements, ensuring compliance and effective data interchange.
IBM WTX Interview Questions Part I
What is a release character?
A release character is a one-byte character in the data that indicates that the character(s)
following it should be interpreted as data, not as a syntax object. The release character is
not treated as data, but the data that follows it is treated as actual data. Release characters
apply to character data only, not binary data.
If a release character is defined for a type, a release character is inserted for each
occurrence of a syntax object in the data of any item contained in that type.
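The rule above can be made concrete with a small sketch. This is not WTX code; it is a Python illustration of the general escaping idea, with an arbitrary choice of backslash as the release character and comma as the delimiter:

```python
# Illustration of the release-character rule: on output, a release
# character is inserted before each syntax object (here, the comma
# delimiter) that occurs inside item data; on input, the character
# after a release is taken as data, not syntax.

RELEASE, DELIM = "\\", ","

def write_record(fields):
    """Join fields, releasing any delimiter or release char in the data."""
    escaped = [f.replace(RELEASE, RELEASE + RELEASE)
                .replace(DELIM, RELEASE + DELIM) for f in fields]
    return DELIM.join(escaped)

def read_record(text):
    """Split on the delimiter, honoring the release character."""
    fields, current, i = [], "", 0
    while i < len(text):
        ch = text[i]
        if ch == RELEASE and i + 1 < len(text):
            current += text[i + 1]    # next character is data, not syntax
            i += 2
        elif ch == DELIM:
            fields.append(current)
            current = ""
            i += 1
        else:
            current += ch
            i += 1
    fields.append(current)
    return fields

line = write_record(["a,b", "c"])
```

Round-tripping `["a,b", "c"]` shows why the release character itself is not treated as data, while the character it releases is.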
What are the group subclasses?
Group types have a subclass of Sequence, Choice, or Unordered.
Sequence
An ordered group of data objects. Each component of a sequence group is validated
sequentially.
Choice
Choice groups provide the ability to define a selection from a set of components like a
multiple-choice question on a test. Choice groups are similar to partitioned groups. A choice
group is validated as only one of its components. Validation of a choice group is attempted
in the order of the components until a single component is validated. If the Choice group has
an initiator, the initiator is validated first.
Unordered
An unordered group has one or more components. Unordered groups can have only the
IMPLICIT format property, with the same syntax options as a sequence group.
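The Choice-group rule above (components tried in order until one validates) can be sketched outside WTX. This is a Python analogue, not type-tree syntax; the component names and parsers are illustrative:

```python
# Conceptual sketch of Choice-group validation: try each component's
# validator in declared order and accept the first that succeeds.

def as_int(text):
    return int(text)            # raises ValueError if not an integer

def as_float(text):
    return float(text)          # raises ValueError if not numeric

def as_text(text):
    return text                 # always validates

CHOICE_COMPONENTS = [("Int", as_int), ("Float", as_float), ("Text", as_text)]

def validate_choice(text):
    """Return (component_name, value) for the first component that validates."""
    for name, parse in CHOICE_COMPONENTS:
        try:
            return name, parse(text)
        except ValueError:
            continue
    raise ValueError("no component of the choice validated")

kind, value = validate_choice("3.5")
```

As in WTX, ordering matters: "7" validates as Int because Int is attempted before Float.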

How to capture invalid records?


Invalid records can be captured using functions such as REJECT, CONTAINSERROR, and ISERROR.

How will you relate Partitioning and Choice?


The components of a choice group are similar to the partitions of a partitioned type.
However, a Choice group can have both items and groups as components. A partitioned
Sequence group can only have group subtypes.

What is functional map?


A functional map is like a subroutine; it maps one portion of data at a time. A functional
map is a map that is used like a function: it takes one or more input objects and generates
one output object. For example, you might have a functional map that maps one Message to
one row in a database table, or one that maps one Header and one Detail to one Item
Record.
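The Header-plus-Detail example above can be sketched as an ordinary function applied once per input object. This is a Python analogue of the idea, not WTX map syntax, and the field names are invented for illustration:

```python
# Conceptual sketch of a functional map: a small function invoked once
# per Detail, combining it with the shared Header to produce exactly
# one Item Record per call.

def item_record(header, detail):
    """One Header + one Detail -> one Item Record."""
    return {"batch": header["batch"],
            "sku": detail["sku"],
            "qty": detail["qty"]}

header = {"batch": "B1"}
details = [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 5}]
items = [item_record(header, d) for d in details]
```

The calling map plays the role of the list comprehension here, invoking the functional map once for each repeating input object.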

Explain the use of Ellipses?


The ellipses option causes lengthy object names to display in the shortest possible form.
Rather than the entire name appearing, portions of the name are replaced with a period (.),
which is used as an abbreviation for an ellipsis (…).

What is Trace File?


The trace file is a debugging aid used to diagnose invalid data or incorrect type definitions. A
map can be configured to create a trace file that can be viewed. The trace file is a text file
that records map execution progress. Input data, output data, or both input and output data
may be included in a trace file. Map settings and adapter commands are used to enable
tracing.

What is workspace?
What are the methods to override input/output settings?
Input and output settings can be overridden by using the RUN function and IFD settings.
Data sources and targets, and other map settings, can be overridden from the Integration
Flow Designer, or when a map is run using the Command Server or the Platform API.

What is command server?


The Command Server is used to develop, test, and execute maps in development
environments. It can also be used to execute commands in production environments, but it
runs only a single map at a time.

What is the purpose of event server?


The Event Server automates the execution of systems of maps and can control multiple
systems. On Windows platforms, the Event Server runs as a multi-threaded service. On UNIX
platforms, the Event Server runs as a multi-threaded daemon.

What are the Prerequisite product versions of ITXA?


Integrating ITXA V9.0 and Sterling B2B Integrator requires the following product versions:
Sterling B2B Integrator V5.2.6.1 or later
ITXA V9.0 or later
ITX V9.0 or later (optional if you are not invoking an ITX map).

What are the benefits of using the ITXA Standards Processing Engine with Sterling
B2B Integrator?
Standards Processing Engine is a high-performance, modular translation solution that
supports WebSphere® Transformation Extender and XSLT maps to simplify onboarding,
and delivers one of the best solutions for HIPAA document processing. The integration of
Sterling B2B Integrator and SPE extends these features to Sterling B2B Integrator users.
What is the Consistency of operation between Sterling B2B Integrator and SPE?
When you use SPE, you maintain familiar Sterling B2B Integrator operation:
You continue to manage Sterling B2B Integrator users from Sterling B2B Integrator. User
authentication is done against Sterling B2B Integrator. Sterling B2B Integrator users are
recognized by SPE and do not need to be recreated.
Sterling B2B Integrator features and operation (for example, reports, and correlation data
from Sterling B2B Integrator processing) are unchanged.
Sterling B2B Integrator business process information is protected during the SPE import
process for use with custom drivers.

How to install ITXA for Sterling B2B Integrator?


The procedure for installing ITXA for Sterling B2B Integrator depends upon your current
system and whether you want to include design components with the runtime installation.
How to use a different database for ITXA and Sterling Integrator?
If using different databases for ITXA and Sterling B2B Integrator, install the JAR files for
the ITXA database and add them to the Sterling B2B Integrator classpath.
To install a third-party JAR file:
1. Copy the .jar file to a directory on the host computer where Sterling B2B Integrator is
installed. Record the path and directory name.
2. Run the install3rdParty script located in the bin directory of your installation. At the
Sterling B2B Integrator command line, type ./install3rdParty.sh (UNIX or Linux) or
.\install3rdParty.cmd (Windows) with the appropriate command parameters.

How to configure the ITXA and IBM Control Center?


Integrate your IBM® Transformation Extender Advanced installation with IBM Control
Center to monitor the status of your ITXA servers and messages using IBM Control Center.
Prerequisites
Create the IBM Control Center database.
Install the following product versions:
IBM Control Center V6.1.0.1 iFix 01 or later
For the hostname, use the FQDN of the machine that will host IBM Control Center.
ITXA V9.0.0.3 or later
If using Sterling B2B Integrator, V5.2.6.2 or later.
How to send events to the control center from itxa?
To send events to IBM Control Center, you set some ITXA properties to configure the
following:
Classes that send ITXA event and system status heartbeats to IBM Control Center.
Login information for the event repository. You set these properties to point to the IBM
Control Center instance.
Other information is used by the event and system status classes, such as the default
instance name to use, the locale, the heartbeat frequency, and a directory for the
temporary storage of the events that are queued to send to IBM Control Center.

What are the types of maps that ITXA can run?


In addition to running WebSphere Transformation Extender maps, ITXA can run other types
of maps (such as Sterling B2B Integrator maps) on the Standards Processing Engine by
using the SPE adapter's TRANSFORM command.
What is spe?
SPE stands for Standards Processing Engine. Standards Processing Engine is a high-
performance, modular translation solution that supports WebSphere® Transformation
Extender and XSLT maps to simplify onboarding, and delivers one of the best solutions for
HIPAA document processing.
What is meant by ITX Design Studio?
ITX Design Studio is the Windows-only development environment for building maps and
related artifacts. It can be installed on the same machine as your runtime server or on a
separate machine. If you use a separate machine for ITX Design Studio, install ITXA on that
machine, and deploy your maps and associated files to your runtime server.
1. What is a functional map and how is it used in WTX?
A functional map in WTX is a reusable component that defines a specific transformation
logic. It can be invoked from other maps to perform common transformation tasks,
promoting modularity and reuse in project development.
2. How does WTX handle large data volumes?
WTX is capable of handling large data volumes through streaming and partitioning
techniques. It processes data in chunks, reducing memory overhead and improving
performance in large-scale transformation scenarios.
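The chunked-streaming idea described above can be sketched with a generator. This is a conceptual Python analogue, not WTX's engine; the chunk size and per-record transform are placeholders:

```python
# Conceptual sketch: stream records a chunk at a time so the full
# dataset is never held in memory, mirroring the streaming and
# partitioning techniques described above.

def chunks(stream, size):
    """Yield lists of up to `size` records from an arbitrary iterable."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def transform_stream(stream, size=3):
    for batch in chunks(stream, size):
        yield [r.upper() for r in batch]   # stand-in per-record transform

out = [r for batch in transform_stream(iter("abcdefgh")) for r in batch]
```

Because both stages are generators, memory use is bounded by the chunk size rather than the dataset size.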
3. Can WTX be used for real-time data processing?
Yes, WTX can be configured for real-time data processing by deploying it in environments
that support real-time data capture and transformation, such as message-oriented
middleware. This enables WTX to process data as it flows between systems without
significant delay.
4. What debugging tools are available in WTX?
WTX offers several debugging tools, including the ability to step through maps in the Design
Studio, watch variable values, and log execution details. These tools help developers identify
and fix issues within the transformation logic.
5. How does WTX handle complex data transformations?
WTX handles complex data transformations using a combination of functional maps, custom
functions, and external calls to databases or APIs. It supports complex conditional logic,
looping, and data aggregation to meet diverse transformation requirements.
6. What are lookup tables and how are they used in WTX?
Lookup tables in WTX are used to store key-value pairs that can be accessed during
transformations to enrich or validate data. They are particularly useful for replacing codes
with meaningful descriptions or for validation against predefined lists.
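The code-to-description enrichment above can be sketched directly. This is an illustrative Python analogue (the lookup keys and field names are invented), not how a WTX map declares a lookup:

```python
# Conceptual sketch of lookup-table enrichment: replace a code with
# its description and flag codes missing from the predefined list.

COUNTRY = {"US": "United States", "DE": "Germany"}

def enrich(record, lookup=COUNTRY):
    code = record["country_code"]
    enriched = dict(record)                          # leave input untouched
    enriched["country_name"] = lookup.get(code, "UNKNOWN")
    enriched["valid"] = code in lookup               # validation against the list
    return enriched

row = enrich({"id": 1, "country_code": "DE"})
```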
7. Explain the role of the Command Server in WTX.
The Command Server in WTX manages the execution of maps from remote locations. It
allows users to run maps via command-line instructions, making it easier to integrate WTX
transformations into automated workflows and batch processes.
8. How is data validation performed in WTX?
Data validation in WTX is performed using type trees that define data formats and
constraints. Maps can include validation logic to check data accuracy and completeness
before proceeding with transformations.
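The type-tree style of validation described above (each field carrying a data type plus constraints) can be sketched as a small schema checker. The schema, field names, and rules here are illustrative assumptions, not WTX type-tree syntax:

```python
# Conceptual sketch: check each field against a declared type and a
# simple restriction before transformation proceeds, in the spirit of
# type-tree validation.

SCHEMA = {
    "qty":  {"type": int, "min": 0},
    "unit": {"type": str, "allowed": {"EA", "KG"}},
}

def check(record, schema=SCHEMA):
    """Return a list of constraint violations (empty if the record is valid)."""
    problems = []
    for field, rule in schema.items():
        value = record.get(field)
        if not isinstance(value, rule["type"]):
            problems.append(f"{field}: wrong type")
        elif "min" in rule and value < rule["min"]:
            problems.append(f"{field}: below minimum")
        elif "allowed" in rule and value not in rule["allowed"]:
            problems.append(f"{field}: not an allowed value")
    return problems

issues = check({"qty": -1, "unit": "EA"})
```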
9. What is the Integration Flow Designer and how is it used?
The Integration Flow Designer in WTX is a graphical tool used to design and deploy
integration solutions that involve complex data transformations. It helps in visually
orchestrating how data moves and transforms across different systems and applications.
10. How can external applications trigger transformations in WTX?
External applications can trigger transformations in WTX through APIs, message queues, or
direct database operations. WTX provides adapters and connectors that facilitate these
interactions, allowing seamless integration with a variety of external systems.
11. Discuss the scalability options for WTX.
WTX scales by deploying on multiple servers or in clustered environments. It can handle
increased loads by distributing transformations across multiple instances, which can be
dynamically adjusted based on the workload.
12. What are the best practices for maintaining WTX maps?
Best practices for maintaining WTX maps include regular updates to type trees and maps,
thorough testing, consistent documentation, and adherence to coding standards. Regular
reviews and optimizations of maps ensure that transformations remain efficient and error-
free.
13. Can WTX be used in cloud environments?
Yes, WTX can be deployed in cloud environments, taking advantage of cloud infrastructure
for flexibility, scalability, and cost-efficiency. Cloud deployments can enhance the
accessibility and reliability of WTX transformations.
14. What support is available for WTX users?
Support for WTX users includes documentation, community forums, and direct support from
IBM through technical support contracts. IBM also offers training and certification programs
to help users maximize their use of WTX.
15. How does WTX handle internationalization and localization?
WTX supports internationalization and localization by handling multiple character sets and
time zone conversions. It ensures that data is appropriately formatted and transformed
according to local standards and practices, which is crucial for global applications.
IBM WTX Interview Questions and Answers - Advanced
1. Describe the audit and logging capabilities in WTX and how they can be
leveraged for compliance and monitoring.
WTX provides extensive audit and logging capabilities that record detailed information about
data transformations, including execution times, outcomes, and error details. These logs can
be used for troubleshooting, performance monitoring, and compliance auditing. WTX logs
can be integrated with enterprise monitoring tools to provide real-time insights into the
health and performance of the data transformation processes.
2. What are the implications of integrating WTX with big data platforms, and how
can it be achieved?
Integrating WTX with big data platforms involves using WTX’s ability to handle large
volumes of diverse data formats to preprocess or transform data before it is loaded into big
data systems. This can be achieved through direct connections to big data platforms or
through batch processes that prepare data for analytics. The implications include enhanced
data quality and accessibility for big data analytics applications, enabling more accurate and
comprehensive analyses.
3. How can WTX contribute to IoT (Internet of Things) data management
strategies?
WTX can play a significant role in IoT data management by transforming data generated
from IoT devices into formats suitable for analysis and storage. WTX can process this data in
real-time or batch modes, apply necessary transformations for standardization and
aggregation, and integrate it with enterprise systems or analytics platforms. This capability
is crucial for leveraging IoT data in operational processes and decision-making.
4. Discuss the role of WTX in a multi-cloud environment and its benefits and
challenges.
In a multi-cloud environment, WTX can facilitate data integration and transformation across
different cloud platforms. This flexibility helps organizations avoid vendor lock-in and
optimize costs by using the best-suited cloud services. However, the challenges include
managing the complexity of multiple cloud APIs, ensuring consistent data security across
clouds, and handling potential data transfer delays or costs.
5. Explain how advanced data mapping techniques can be applied in WTX to solve
complex integration scenarios.
Advanced data mapping techniques in WTX, such as using recursive maps, dynamic
mapping, and external function calls, enable developers to address complex integration
scenarios. These techniques allow for the transformation of highly complex or variable data
structures and the integration of logic that adapts to data context. Solving these scenarios
often involves a deep understanding of both the source and target systems and the ability to
implement efficient and maintainable transformation logic.
6. What are the considerations for WTX deployment in hybrid IT environments?
In hybrid IT environments, where resources are distributed across on-premises and cloud
platforms, WTX deployment must consider network connectivity, data security, and
consistent configuration management. Ensuring seamless data flow between on-premises
and cloud components requires robust networking solutions, often involving VPNs or
dedicated links. Security measures must be uniform across all environments, including
consistent encryption practices and unified access controls. Additionally, configuration
management tools should be used to maintain consistent deployments and updates across
different platforms.
7. How can WTX be utilized for data governance and compliance initiatives?
WTX plays a crucial role in data governance and compliance by ensuring that data
transformations adhere to relevant standards and regulations. It can enforce data integrity,
accuracy, and consistency through validation rules and transformations that standardize
data. Audit trails generated by WTX can also support compliance reporting and analysis,
providing detailed insights into data handling processes and compliance with data protection
regulations like GDPR or HIPAA.
8. Discuss the performance tuning and optimization strategies specific to WTX in
processing extremely large datasets.
For extremely large datasets, WTX performance can be optimized by employing parallel
processing, optimizing type trees and maps for minimal resource consumption, and using
efficient data formats and protocols. Implementing streaming data techniques to handle
data in chunks rather than loading entire datasets into memory can significantly reduce the
performance overhead. Additionally, leveraging in-memory processing where applicable can
speed up transformation times. Regular profiling and benchmarking are also critical to
identify performance bottlenecks and address them systematically.
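The chunked/streaming idea described above is generic; a minimal Python sketch of batch-wise record processing (the file path and batch size are illustrative assumptions, not WTX configuration):

```python
def stream_records(path, batch_size=1000):
    """Yield records from a file in fixed-size batches instead of loading
    the entire dataset into memory at once."""
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:                 # flush the final partial batch
        yield batch
```

Each batch can then be transformed and written out before the next is read, which is the same principle WTX burst mode applies to map inputs.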
9. How does WTX handle dynamic data sources and schema changes?
WTX can handle dynamic data sources and schema changes by using flexible type trees that
accommodate variations in data structures. Maps can be designed to dynamically adjust to
the data they process, using conditional logic and lookup functions to adapt to changes in
the schema. This flexibility ensures that WTX transformations remain robust and effective
even as the underlying data sources evolve.
10. Explain the mechanisms WTX provides for data cleansing and enhancement
during transformations.
WTX offers a variety of mechanisms for data cleansing and enhancement, including data
validation functions, transformation operators that standardize and correct data, and
integration with external data quality tools. These capabilities allow WTX to remove
inconsistencies, apply business rules to ensure data quality, and enrich data with
additional information from external sources, thus improving the overall value and usability
of the data.
11. What advanced debugging and testing techniques are available in WTX for
complex mapping scenarios?
For advanced debugging and testing in WTX, developers can use interactive debug sessions
that allow stepping through transformations step-by-step, examining intermediate data
values and computational states. Automated testing frameworks can also be integrated with
WTX to run complex test cases and scenarios, ensuring that maps behave as expected
under various conditions. Additionally, simulation tools within WTX allow the testing of maps
with hypothetical data, facilitating thorough testing without the need for live data feeds.
12. How can WTX be integrated with artificial intelligence and machine learning
models?
WTX can be integrated with AI and machine learning models by preprocessing data in
formats suitable for these models and then handling the output of the models for further use
in business processes. This integration typically involves transforming data into a clean,
structured format that machine learning models can consume, and then taking the
predictive or analytical results from these models to use in decision-making processes,
further enriching the business insights provided by WTX transformations.
13. Discuss the impact of containerization on WTX deployment and management.
Containerization impacts WTX deployment and management by enabling more agile,
scalable, and consistent delivery of transformation services. Containers encapsulate WTX
environments, ensuring they run uniformly regardless of the underlying infrastructure. This
facilitates easier deployment, scaling, and management across diverse environments,
including cloud platforms. Additionally, container orchestration tools like Kubernetes can
manage WTX containers, automating deployment, scaling, and recovery processes.
14. What are the implications and considerations of implementing WTX in a
serverless computing environment?
Implementing WTX in a serverless computing environment involves understanding the
event-driven, stateless nature of serverless architectures. WTX must be adapted to handle
short-lived, dynamic transformations that respond to events such as file uploads or data
streams. Considerations include managing cold starts, optimizing execution times to fit
within the limits imposed by serverless platforms, and ensuring cost-efficiency given the
pay-as-you-go pricing model.
15. How does WTX support business activity monitoring and operational
intelligence?
WTX supports business activity monitoring and operational intelligence by providing detailed
logs and metrics on transformation processes. These logs can be integrated with business
monitoring tools to track key performance indicators, operational efficiencies, and
transactional integrity. Real-time analytics can be applied to this data to derive insights into
business operations, enabling proactive management and decision-making based on current
data flows and transformations.

Mercator WTX 8.1 interview questions


 EDI
Is the X12 file positional delimited or literal delimited?
Both positional and literal delimited: the ISA segment is positional (fixed length, 106 characters in total), while the remaining segments are delimited by characters such as * or /.

What is EDI? Electronic Data Interchange. EDI is simply the sending and receiving of information using
computer technology. Its efficiency has made it a condition of doing business in dozens of industries
(retail, grocery, etc.). Any standard business document that one company would exchange with another
(such as a PO, invoice, or health care claim) can be exchanged via EDI between the two parties, or trading
partners, as long as both have made the preparations. EDI is transmitted in a structured format, based on
the use of message standards, which ensures that all participants use a common language.
What are the uses of EDI? 1) Manage Huge volumes of transactions
2) Less Operating Cost
3) Eliminates delays
4) Eliminates data entry errors
5) Bridges the information gap that exists between companies using different computer systems.
6) Elimination of paper documents
7) Greater accuracy of information
8) Better tracking
9) Speed. Because information is moved faster and with greater accuracy, time spent communicating with
the suppliers decreased.

What are the various formats/flavors available in EDI?
ANSI X12 - American National Standards Institute
EDIFACT - EDI For Administration, Commerce and Transport
XML - Extensible Markup Language
CSV - Comma-Separated Values

What is EAI? EAI is an industry term used to describe the infrastructure needed to facilitate disparate
applications communicating together. With EAI all the applications are communicating via a central
system, or middleware. No specialized programs at either the source or destination location perform this
data translation from one format to another. It is the responsibility of the EAI system to provide a service
to change the data formats between the two applications.

What is an inbound file?


Inbound file is an EDI file received from a trading partner and its data gets parsed or mapped to an
existing system (Legacy system)
What is VAN? Communication between trading partners is usually handled by a carrier. The carrier, also
known as a third-party network or Value Added Network (VAN), acts like a postal service between trading
partners, who are using standard communication protocols.
What is an EDI Translator? For the bookstore system to have a consistent EDI interface, an EDI
translator is essential. The EDI translator normalizes the EDI documents going to and from the bookstore
system to the trading partners. EDI translators are available from a number of independent companies
who not only provide the translator software, but also provided updated dictionaries, as new revisions to
the standards become available.
What is an outbound file? Outbound file is an EDI file generated data out of an existing (Legacy) system
sent to a trading partner.

Mercator – general
What is the main strength of Mercator? Partitioning is the main strength of Mercator.

What are mrc and mrn files? Where will you use them?
Mrc - This is the filename extension for a resource configuration file. A resource configuration file contains
specifications for an engine such as the active virtual server(s) and its associated .mrn file. Mrn - This is
the filename extension for a resource name file. A resource name file contains a named set of virtual
servers and a named set of resources. Each named resource specifies a value for that resource for each
virtual server.

What is resource register?


The Resource Registry is an application that is used to define name aliases for source and target
resources that are specified in map cards.

What is “ini” file?


The dstx.ini file is the configuration file for Ascential DataStage™ TX and contains specific settings for the
Event Server. The file is created in the root directory during the installation process. You can use a text
editor to edit option values as needed. The dstx.ini file contains default initialization settings for the Event
Server under the following headings:
- Launcher
- Resource Manager
- Connections Manager

What is Process Control File?


Process Control File has the information which is supplied to the Command Server that controls the
processing of maps. The Integration Flow Designer can generate command files that control Command
Server processing and event server files that control Event Server processing.
Type Designer
What is $=$ in component rule?
$=$ - when the particular component is present, validation should proceed without throwing any
warnings/errors.

How can you incorporate validation in a type tree?


By using component rule validation can be incorporated in a type tree.
What is a component rule? Is it advisable to use component rules?
A component rule is an expression about one or more components. It indicates what must be true for
that component to be valid. For given data, it evaluates to either TRUE or FALSE; if the rule evaluates to
FALSE, the component is invalid. A component rule cannot be greater than 32K. No, it is generally not
advisable to use component rules.

What is release Character?


A release character is a one-byte character in the data that indicates that the character(s) following it
should be interpreted as data, not as a syntax object. The release character is not treated as data, but the
data that follows it is treated as actual data. Release characters apply to character data only, not binary
data.
If a release character is defined for a type, a release character is inserted for each occurrence of a syntax
object in the data of any item contained in that type.

The following options are available for the Release property:

- None - Release characters are not enabled.
- Literal - The release character is a constant value defined by the Release > Value property. Expand the Release property to define the literal release value.
- Variable - The release character is a variable value defined by the Release > Default, Item and Find properties. Expand the Release property to define the variable release values.
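As a simplified illustration of the release-character concept (a Python sketch, not WTX behavior; the comma delimiter and backslash release character are assumptions for the example):

```python
def release_escape(value: str, delimiter: str = ",", release: str = "\\") -> str:
    """Insert the release character before each delimiter occurring inside
    item data, so the delimiter is read as data rather than as syntax."""
    # Escape any literal release characters first, then the delimiter itself.
    escaped = value.replace(release, release + release)
    return escaped.replace(delimiter, release + delimiter)
```

A reader of the escaped data would then treat any character following the release character as data, not as a syntax object.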

What is sized attribute? The sized attribute specifies that the value of the given component represents the
size of the following component. The sized attribute is used on a component whose value specifies the
size (in bytes) of the component immediately following it. The sized attribute can be used on more than
one component of a group.

Size
For example, you may have a variable length component with a number immediately preceding it that
indicates the length of the component:
10Washington -- The 10 indicates the size of the following component.

Some important points about using the sized attribute are:

- The component with the sized attribute must be defined as an unsigned integer.

- If a binary byte stream item does not have a fixed size, the component preceding it must specify its size and the sized attribute must be used on that component.

The size of a component is the number of bytes from the beginning of that component, up to and
including the end of the component. If a component has a series range [such as (1:3)], the size includes
all of the members in the series of that component. If a delimiter separates each member of that series,
the delimiters must be included in the size. Also, if release characters appear in the component, they
must be included in the size.
The size does not include delimiters that separate one component type from the next.
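The "10Washington" example can be sketched in Python (an illustrative analogue of how a sized component drives parsing, not WTX itself; the two-digit prefix length is an assumption for this example):

```python
def parse_sized(data: str, prefix_len: int = 2):
    """Parse a field whose length is given by an unsigned-integer prefix,
    as in '10Washington' where 10 sizes the following component."""
    size = int(data[:prefix_len])               # the sized component
    value = data[prefix_len:prefix_len + size]  # the component it sizes
    rest = data[prefix_len + size:]             # remaining input
    return value, rest
```

Repeated calls on the remainder would walk a stream of size-prefixed fields.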

What is Restart attribute? The restart attribute specifies an error recovery point. In order to map invalid
data of a particular object, errors during validation are ignored by assigning the restart attribute. Then you
can map the invalid data using any or all of the error functions REJECT, ISERROR, and
CONTAINSERRORS. To continue processing your input data when a data object of a component is
invalid, assign the restart attribute to that component.
Note Do not put the restart attribute on a required component. There must be a sufficient number of valid
instances to cover all the required components. If you have a required component, that is not valid, the
restart attribute will not validate the data.

What is identifier? When creating type trees and defining components of a group, the identifier attribute
can be assigned to one component to identify a collection of components that is used during data
validation to determine whether a data object exists. The identifier attribute can be used on a component
of a group. The identifier indicates the components that can be used to identify the type to which a data
object belongs. All the components, from the first, up to and including the component with the identifier
attribute, are used for type identification.
During validation, when the identifier component is reached, a specific group has been found.
That group, therefore, is known to exist, even if part of the group following the identifier is missing.

What is a fixed and delimited property?


In a fixed group, the size of each element is fixed, and a fixed group can have only fixed components. If
the size of the field is 10 and the actual size of the element is 5, 5 pad characters (spaces) will be added
to the output. In a fixed group, components are identified by their position; in a delimited group,
components are identified by a delimiter.

What restrictions are available in a type tree? Exclude and include.

What are the different types of analysis? Logical and structural analysis.
Logical Analysis
Logical analysis addresses the integrity of the relationships that you define. Logical analysis detects, for
example, undefined components, components that are not distinguishable from one another, item
restrictions that do not match the properties of that item, and circular type definitions. The analyzer also
checks delimiter relationships, and logic errors contained in component rules.
Structural Analysis
Structural analysis addresses the integrity of the underlying database. Structural analysis may be able to
detect and possibly correct defects caused by system environment failures. Typically, you should not
encounter structural analysis errors.

What are the group subclasses?


Group types have a subclass of Sequence, Choice, or Unordered.
Sequence
A partially-ordered or sequenced group of data objects. Each component of a sequence group is
validated sequentially.

Choice
Choice groups provide the ability to define a selection from a set of components like a multiple-choice
question on a test. Choice groups are similar to partitioned groups. A choice group is validated as only
one of its components. Validation of a choice group is attempted in the order of the components until a
single component is validated. If the Choice group has an initiator, the initiator is validated first.

Unordered An unordered group has one or more components. Unordered groups can only have IMPLICIT
format property, with the same syntax options as sequence.

What is partitioned property?


If the data of a type can be divided into mutually exclusive subtypes, it can be partitioned.
What is explicit and implicit?
A Sequence group has either an Explicit or Implicit format. For example, if each component of a fixed
group has a fixed size, the component is distinguished from the next component by its position in the
data. Or, a group may have delimiters that appear for missing components. In these cases, the format is
apparent; the group has an explicit format.
If a group does not have an explicit format, it has an implicit format. An implicit format relies on the
properties of the component types. In this example, the components make some pattern in the data and it
is possible to distinguish between them, but the format is not fixed and if delimiters separate components,
they do not appear for missing components.
When deciding what format a group has, it may help to ask first whether it is clear where one component
ends and another begins. Generally, a group has an explicit format if the position of each component in
the data stream is always the same or if a delimiter always marks the place for each component.

What is partitioning?
Partitioning is a method of subdividing objects into mutually exclusive subtypes.
Map Designer
For a functional map, A and B are parameters. A occurs 2 times and B occurs 3 times. How many times
does the functional map execute?
2 times. If a particular occurrence of a parameter to a functional map is null (NONE), the functional map
will not execute for that occurrence.

What is LOOKUP function? The LOOKUP function sequentially searches a series, returning the first
member of the series that meets a specified condition.

Syntax
LOOKUP (series-object-expression, single-condition-expression)

Meaning
LOOKUP (series_to_search, condition_to_evaluate)

Returns
This function returns a single-object. Returns the first member of series_to_search for which
condition_to_evaluate evaluates to TRUE. Returns NONE, if no member of series_to_search meets the
condition specified by condition_to_evaluate.

Uses
Use LOOKUP to find an occurrence of an object that meets a certain condition. LOOKUP performs a
sequential search over series_to_search. Use LOOKUP if series_to_search is not ordered.
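The semantics of LOOKUP can be mimicked in Python (an illustrative analogue, not WTX syntax; `None` stands in for WTX's NONE):

```python
def lookup(series, condition):
    """Sequentially search series and return the first member for which
    condition evaluates to TRUE; return None if no member matches."""
    return next((member for member in series if condition(member)), None)
```

Because the search is sequential, the series need not be sorted, matching the stated use of LOOKUP.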

What is EXTRACT function? The EXTRACT function returns each member of a series for which a
specified condition is true.

Syntax
EXTRACT (series-object-expression, single-condition-expression)

Meaning
EXTRACT (series_to_search, condition_to_evaluate)

Returns
This function returns a series-object. The result is each member of series_to_search for which the
condition specified by condition_to_evaluate evaluates to TRUE. EXTRACT returns NONE, if no member
of series_to_search has a corresponding condition_to_evaluate that evaluates to TRUE.

Uses
Use EXTRACT whenever you need only particular members of a series returned—those that meet a
certain condition.

Note The EXTRACT function can only be used in a map rule. It cannot be used in a component rule.
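An illustrative Python analogue of EXTRACT (not WTX syntax; an empty list plays the role of NONE when nothing matches):

```python
def extract(series, condition):
    """Return every member of series for which condition evaluates to TRUE,
    unlike a LOOKUP-style search that stops at the first match."""
    return [member for member in series if condition(member)]
```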

What is RUN function? The RUN function allows you to execute another compiled map from a component
or map rule.

Syntax
RUN (single-text-expression [ , single-text-expression ] )

Meaning
RUN (map_to_run [ , command_option_list ] )

The first argument, map_to_run, is an expression identifying the name of the compiled map (.mmc) to be
run. Command_option_list is used to specify execution commands applicable to the map to be run.
Command_option_list is a text item containing a series of execution commands separated by a space.
Any execution command can be used as part of the command_option_list. For example, you can send
another map data by using the echo command option (-IEx). See the Execution Commands Reference
Guide for a list of command options.

Note The command_option_list is optional.

Returns
This function returns a single-text-item. The result of the RUN function depends on the command options
in command_option_list:

- If you use the echo command option for an output card, the data from that card will be passed back as a text item to the object in the map from which it was run.

- If you use the echo command option for more than one output card, the data from all echoed cards will be concatenated together and passed back as a text item to the object in the map from which it was run.

- If you do not use the echo option, the return code indicating the status of the map that was run will be passed back to the object in the map from which it was run.
Please refer to the attached document to know about the various options

What is CLONE function? The CLONE function creates a specified number of copies of some object.

Syntax
CLONE (single-object-name, single-integer-expression)

Meaning
CLONE (object_to_copy, number_of_copies)

Returns
This function returns a series-object. The CLONE function returns a series of the object specified by
object_to_copy. The output series consists of as many copies of the object as specified by
number_of_copies. The value of each member of the resulting output series is the same as
object_to_copy.
Uses
The CLONE function is useful when the number of output objects to be built depends on a data value,
rather than the number of objects that exist in the data.
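A minimal Python analogue of CLONE (illustrative only, not WTX syntax):

```python
def clone(object_to_copy, number_of_copies):
    """Return a series of number_of_copies copies of object_to_copy,
    each member having the same value as the original."""
    return [object_to_copy] * number_of_copies
```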

What is CHOOSE function? The CHOOSE function returns the object within a series whose position in
the series corresponds to a specified number.
Syntax
CHOOSE (series-object-name, single-integer-expression)
Meaning
CHOOSE (from_these_objects, pick_the_nth_one)
Returns
This function returns a single-object: the member whose index within the from_these_objects series
matches the number specified by pick_the_nth_one. If that member of the series does not exist,
CHOOSE returns NONE.
Uses
Use CHOOSE when you need to use a variable value to specify the index for a particular object from a
series.
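A Python analogue of CHOOSE (illustrative, not WTX syntax; note that WTX series indices are 1-based, and `None` stands in for NONE):

```python
def choose(from_these_objects, pick_the_nth_one):
    """Return the member at the given 1-based position, or None when that
    member does not exist."""
    if 1 <= pick_the_nth_one <= len(from_these_objects):
        return from_these_objects[pick_the_nth_one - 1]
    return None
```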

What is COUNT?
The COUNT function returns an integer representing the number of valid input objects in a series.
Syntax
COUNT (series-object-expression)
Meaning
COUNT (valid_objects_to_count)
Returns
This function returns a single-integer. The result is the number of valid_objects_to_count. If the input
argument evaluates to NONE, COUNT returns 0.

Note COUNT does not count existing NONEs unless its group was defined as an explicit format with a
Track setting of Places.

Uses
Use COUNT when you need to count a series of input or output objects.

What is INDEX function? The INDEX function returns an integer that represents the index of an object
relative to its nearest contained object, counting only valid objects.
Syntax
INDEX (single-object-name)
Meaning
INDEX (object_for_which_to_get_index)
Returns
This function returns a single-integer. The result is the index of object_for_which_to_get_index.
- If object_for_which_to_get_index is an input, this will be the index within all valid objects.

- If object_for_which_to_get_index is an output, this will be the index within all objects (valid and invalid).

Returns 0 if the input argument is NONE.

Note The difference between INDEXABS and INDEX is that INDEXABS counts both valid and invalid
instances, whereas INDEX counts only valid instances.

Uses
Use INDEX when you need to select or test particular objects based on their occurrence, as in the
example above. Or, use index to add a sequence number to output objects.

Note INDEX cannot be used in a component rule.

What is the difference between COUNT () and COUNTABS ()?


The COUNTABS function returns an integer representing the number of input objects in a series. Unlike
COUNT, COUNTABS includes both valid and invalid objects in a series.
What is PUT / GET function? The PUT function passes data to a target adapter. The GET function returns
data from a specified resource adapter.

What is a SEARCHDOWN function? The SEARCHDOWN function performs a binary search on a series
sorted in ASCII descending order, returning a related object that corresponds to the item found.

Syntax
SEARCHDOWN (series-object-expression, series-item-object-expression, single-item-expression)

Meaning
SEARCHDOWN (corresponding_object_to_return, descending_items_to_search, item_to_match)

Returns
This function returns a single-object. Performs a binary search on the item series of
descending_items_to_search. The descending_items_to_search must be sorted in ASCII descending
order. The value to search for is specified as the item_to_match. The object returned
(corresponding_object_to_return) must be related to descending_items_to_search by a common object
name.
If no match is found, SEARCHDOWN returns NONE.

Uses
Use SEARCHDOWN when data is sorted in ASCII descending order and you need to look up data within
the sorted data.
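The binary search over descending-sorted keys can be mimicked in Python (an illustrative analogue; WTX relates the returned series to the searched series by a common object name, which is modeled here simply as two parallel lists):

```python
import bisect

def searchdown(corresponding_objects, descending_keys, item_to_match):
    """Binary-search descending_keys (sorted in descending order) for
    item_to_match and return the parallel member of corresponding_objects;
    return None (the analogue of NONE) when no match is found."""
    ascending = descending_keys[::-1]          # bisect requires ascending order
    i = bisect.bisect_left(ascending, item_to_match)
    if i < len(ascending) and ascending[i] == item_to_match:
        return corresponding_objects[len(descending_keys) - 1 - i]
    return None
```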

What is the difference between the PACKAGE and TEXT functions? Use PACKAGE when you need to convert an
object to a text item that includes its initiator, terminator and any delimiters it contains. PACKAGE differs
from TEXT in that it includes the initiator and terminator of the input object.

What is the difference between LOOKUP and EXTRACT function? LOOKUP differs from EXTRACT in
that LOOKUP returns the first member of series_to_search that meets the condition_to_evaluate, while
EXTRACT returns all members (one at a time) of series_to_search that meet the condition_to_evaluate.

How do you find the difference between two dates (dynamic values)? Using the DATETONUMBER() function. This
function returns an integer that results from counting the number of days from December 31, 1864 to the
specified date; subtracting the two results gives the difference in days.
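The same day-count arithmetic can be sketched in Python with the standard library (an analogue for illustration; it assumes the simple day subtraction of `datetime.date` matches the day counting described above):

```python
from datetime import date

def date_to_number(d: date) -> int:
    """Days counted from December 31, 1864 to d, mirroring DATETONUMBER()."""
    return (d - date(1864, 12, 31)).days

def days_between(d1: date, d2: date) -> int:
    """Difference of two dates, as DATETONUMBER(d2) - DATETONUMBER(d1)."""
    return date_to_number(d2) - date_to_number(d1)
```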
What are the different fetching modes of a map, and how do they differ? Burst and integral modes.
BURST MODE
Burst mode allows data to be returned from an input or passed to an output in pieces (or bursts) rather
than in a single large buffer of data. This is valuable if you are dealing with large amounts of data and it
would not be feasible to pass all of the data in one retrieval due to memory limitations.
When a map executes in burst mode, it processes all of the inputs, then the outputs, then revisits the
inputs to determine whether they have any more data. This processing continues until all inputs in burst
mode have exhausted their data.

Within a map, some input cards may be in burst mode while others are in integral mode, which means
that they are executed only once. To specify a database card to run in burst mode, change the
SourceRule > FetchAs setting to Burst and specify the FetchUnit setting. The FetchUnit setting is the
maximum number of rows that will be retrieved per fetch. The default is S which indicates that all of the
rows are retrieved by the SELECT statement.

Note If the SourceRule > FetchAs setting for an input card is set to Burst, its Transaction > Scope setting
will always be Map.
INTEGRAL MODE

When messaging adapters are specified as the source of the data for an input card using integral mode
(SourceRule > FetchAs = Integral), the adapter retrieves up to the number of messages specified with the
Quantity adapter command (-QTY) in a single burst.
What are the different error messages that a map can display? 1) One or more inputs invalid
2) Input valid but unknown data found
3) One or more outputs invalid
4) Source not available
5) Input type contains error
6) Open audit failure
7) Fail function aborted map

How to capture invalid records? Using functions such as REJECT, CONTAINSERRORS, and ISERROR.

How will you relate Partitioning and Choice?


The components of a choice group are similar to the partitions of a partitioned type. However, a Choice
group can have both items and groups as components. A partitioned Sequence group can only have
group subtypes.

Which one is efficient and optimized - !create or Sink? Justify your answer.
!create is more efficient when compared with Sink.

!create – Regardless of whether or not data content is produced, output data is not to be sent to its target.
This option is available for temporary data storage as a map runs.
Sink – This adapter is used as a temporary data destination for an output map card which then discards
the mapped data. This capability is useful when a temporary destination is needed to accept output data
as part of the map execution without writing the output to a stationary destination.

What is REJECT ()?


The REJECT function returns the content of an object in error as a text item.
Syntax
REJECT (series-object-expression)
Meaning
REJECT (series_to_look_for_bad_objects)
Returns
This function returns a series-text-item. Evaluates to a series of text items consisting of all the input series
members in error.
Uses
Used in conjunction with the restart attribute. For information about the restart attribute, see the Type
Designer Reference Guide and the Map Designer Reference Guide. The REJECT function can be used
only in a map rule. It cannot be used in a component rule.

What is validation map? Validation Map is the map with which the input is validated against type
definition.
How to check map execution time? The execution time is reported in the map's execution summary/audit log after the map runs.
What is BUILD command?
The BUILD command on the map menu is used to build the executable map for the selected map. The
Build command analyzes the logical interfaces within map rules. The BUILD command generates the
compiled map file with the .mmc extension. Analysis includes checking for missing rules, invalid rules, invalid
card definitions, verifying map references, and verifying the arguments of functions. Analysis also looks
for circular map references - maps that reference one another.
When a map is built, the map and all of the functional maps that are referenced within that map are
analyzed. An executable map has a data source or target specified for each of its cards. Executable maps
are built. Functional maps are not built.

Note Data sources and targets specified for an executable map are defaults built into the compiled map
file.

What is functional map?


A functional map is like a subroutine; it maps a portion of data at a time. A functional map is a map that is
used like a function. It takes one or more input objects and generates one output object. For example, you
might have a functional map that maps one Message to one row in a database table. Or you might have a
functional map that maps one Header and one Detail to one ItemRecord.

Note The results of the functional map are sent directly to the output card. They are not passed back to
the calling rule.

When to use a functional map?


The use of functional maps is very common. Almost every executable map created will use at least one
functional map. To map a group in the input to a different group in the output, use a functional map. For
example, use a functional map to map an input row to an output row when the rows are defined
differently. Or, use a functional map to map from a file containing many input rows, to generate a file of
many output rows with one output row per input row. The first output row would correspond to the first
input row, the second output row corresponds to the second input row, and so on.
A map defines how to generate the output data. One important factor to consider in determining when to
use a functional map is the presence of an output component with a range of more than one. For
example, ranges of (s) or (1:10). The number of this output object to be created is based on the number
of some input object. Another important factor in determining when to use a functional map is when you
want to transform the data - mapping from one or more types to a different type. In the preceding example
of the functional map that maps one Message to one row in a database table, the input row and the
output row are two different types.

Note Use a functional map when the number of a certain output group that you want to create is based on
the occurrences of some input or output data - and the types are different types.

What are the adapters have you come across? Database, File, Sink, MQseries, JMS.
What is the difference between the DBQUERY and DBLOOKUP functions?
A trailing (end-of-data) character will be present at the end of the DBQUERY output, while the DBLOOKUP output will not have it.

On a performance basis, which is better - a component rule or a map rule? A map rule is the more optimized option.

Why is the ISERROR function used in conjunction with the restart attribute? Can ISERROR be used alone?
ISERROR tests whether an object is in error. Without the restart attribute, invalid data causes validation to fail rather than being retained for mapping, so there are no error objects for ISERROR to examine; the restart attribute marks the error-recovery point that makes invalid data available to the error functions (ISERROR, REJECT, CONTAINSERRORS).

Explain the use of Ellipses?


The ellipses option causes lengthy object names to display as the shortest possible object name. Rather
than having the entire name appear, the repeated portions of the name are replaced with a period (.),
used as an abbreviation of an ellipsis (…)

Ex: Complete Object Name : company field:record:Input


Shortened Name : company field:.:Input

Explain f (A, B) having 3 occurrences of A and 2 occurrences of B, where A and B are independent. How
many times is this functional map called?
It is called 6 times (3 × 2, the cross-product of the occurrences of A and B).

Explain f (A, B) having 3 occurrences of A and 2 occurrences of B, where A and B are dependent. How
many times is this functional map called? The map is called 2 times, for the pairs (A,B) (A,B); the third A has no matching B.
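
A hedged sketch of the two cases (component and group names are illustrative). When A and B index
independently, the engine evaluates the functional map over the cross-product of their occurrences; when
they are drawn from the same repeating parent group, the map is evaluated once per pairing:

Ex (independent, called 3 × 2 = 6 times): Out:Output = f_Pair (A:Input, B:Input)

Ex (dependent, called once per pair): Out:Output = f_Pair (A:Pair:Input, B:Pair:Input)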

3 A’s and 2 B’s are present in a single file; how do you segregate all A’s to one file and all B’s to another?
File A => f(A), file B => f(B), using one output card per file.

1000 A’s are in one file; how do you separate each A into a different file with a different name? PUT(“FILE”, Name +
INDEX, PACKAGE(input item))

What command options are used with the RUN function?

¨ -IF – input files (specify location)
¨ -IE – input echo (direct data override)
¨ ECHOIN – package data path from the calling map (make a copy)
¨ HANDLEIN – direct data
¨ -OF – output files
¨ -TS – summary
¨ -AU – audit
¨ -OASink – output sink
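
An illustrative RUN call combining the file override options above (the map name and file paths are
assumptions, not from the source):

Ex: RUN ("submap", "-IF1 neworders.txt -OF1 results.txt")

Here -IF1 overrides the source of input card 1 with the named file, and -OF1 redirects output card 1 to
results.txt.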

What are the different Adapters?


PFA the document to have an idea about the list of adapters

What is organizer Window?


The Organizer window contains information about the selected map. The information includes unresolved
rules, remarks, audit settings, the trace file, the audit log, and build errors.
The Organizer window has six tabs:

¨ Unresolved Rules
¨ Remarks
¨ Data Audit Settings
¨ Trace
¨ Audit Log
¨ Build Results

Note If you select the Unique filename option for the audit log (Map Audit > Audit Location > Filename),
the audit log will not be viewable from the Organizer window.

Each tab on the Organizer window can be floated as a separate window. The font of each Organizer tab
can be customized.

What is EXIT function?


The EXIT function allows you to interface with a function in an external library or application. Depending
on the execution platform, there are two different methods for the EXIT function.

Syntax
EXIT (single-text-expression, single-text-expression, single-text-expression)

Meaning 1 – Library Method

Returns
This function returns a single-text-item. Set lpep->nReturn equal to 0 if the function is to succeed, or set it
equal to 1 to fail. For detailed information on the requirements of the library function that is executed by
the EXIT function, refer to the product documentation.

Meaning 2 – Program Method

EXIT (program name, command_line_arg1, command_line_arg2)

At execution time, the program specified by program name executes and passes the concatenation of
command_line_arg1 + " " + command_line_arg2 as a text string. Whatever is returned by program name
to the standard output device is returned as text.

Returns
This function returns a single-text-item. Returns a text string from the function or application that is
executed. If the EXIT function is not available for a particular platform, EXIT returns NONE.

Uses
Use EXIT when you need information from an existing function in a library or a program, or when you
need to use a general function that is not available.
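
An illustrative example of the program method above (the program name and arguments are
hypothetical):

Ex: EXIT ("getstatus.exe", "ORD1001", "US")

At execution time, getstatus.exe runs with the command line "ORD1001 US", and whatever it writes to
the standard output device is returned to the map rule as text.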
What is the FAIL function? The FAIL function returns NONE, aborts the map, and returns its argument as
the map completion error message.

Syntax
FAIL (single-text-expression)

Meaning
FAIL (message_to_return)

Returns
This function returns NONE. Returns NONE to the output to which the function is assigned, aborts the
map and returns message_to_return as the map completion error message included in the execution
audit. The map return code will be "30", indicating that the map failed through the FAIL function.
Uses
Use the FAIL function to abort the map based on map or application specific logic.
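
A sketch of FAIL inside conditional logic (the field names are illustrative):

Ex: IF (Qty Field:Row:Input < 0, FAIL ("Negative quantity in input row"), Qty Field:Row:Input)

If any quantity is negative, the map aborts with return code 30 and the message appears in the execution
audit; otherwise the quantity is mapped through unchanged.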

What is Trace File?


The trace file is a debugging aid used to diagnose invalid data or incorrect type definitions. A map can be
configured to create a trace file that can be viewed.
The trace file is a text file that records map execution progress. Input data, output data, or both input and
output data may be included in a trace file. Map settings and adapter commands are used to enable
tracing.
When input data is traced, the trace file provides a step-by-step account of the data objects found, why
the data is found to be invalid, sizes and counts of data objects and their position in the data stream.

A trace file of input messages contains a message for each input data object. Each input data object
message describes the:

¨ level of the object in the card


¨ offset of the data object in the data stream
¨ length of the data object
¨ component number of the data object
¨ index of the component
¨ a portion of the actual data
¨ name of the type it presumably belongs to

A trace file of output messages specifies which objects are built and which output objects evaluate to
NONE.

Note Because performance can be impacted, it is best to use the trace option only during debugging.

30 maps are present in a system. How do you find how many executed completely? By looking into the
Event Server log.

What is Pseudo Maps?


Pseudo maps reference executable maps that have not yet been implemented. Pseudo maps are
represented by a pink icon.

How to convert pseudo maps to executable map?


Pseudo maps are the only components in the Integration Flow Designer (IFD) that allow you to create
source maps and add input/output cards.

To create a source map from a pseudo map

1. Right-click the pseudo map icon and select Create Source Map from the context menu.
The Create Source Map dialog appears.

If the map source file exists, click Browse to display the current list of executable maps in this map source
file. If the specified map source file does not exist, or if no map source file is specified, an error message
is displayed.
You can sort the cards by number or by name. A card is undefined if any of the following are not
specified:

¨ type tree
¨ type name
¨ source or target

2. Click the Edit button to provide the missing information.


The Edit Input Card dialog or Edit Output Card dialog appears, depending on the card currently
selected. After all the missing information is supplied for a card, OK is displayed next to the card name.
The Create button becomes active when all cards are defined, the executable map name and map
source file are defined, and there is at least one output card.

3. Click Create
A map source file is created or the executable map is added to an existing map source file. If the directory
for the desired map source file does not exist, then it is created. If creating the source map is successful,
the pseudo map component is converted to a source map component (the icon color changes from pink
to blue). You can now perform any action that is valid for a source map component - for example, opening
the Map Designer and defining rules for the map.

How to build maps in different environment?


To run a map on another platform, you must build the map for that platform, using the Build for Specific
Platform command. To run a map on another platform, that platform must have the Command Server
installed. To identify which map is built for a specific platform, maps compiled for specific platforms are
compiled with platform-specific file name extensions. The name of the platform-specific compiled map file
is the executable map name with the platform-specific file name extension.

For example, building the map MyMap for the MVS platform compiles the MyMap.mvs map. The platform-
specific file name extensions prevent you from inadvertently overwriting your original compiled map and
help identify which compiled map file should be transferred to the specific platform environment.

Note Creating a folder exclusively for platform-specific maps is strongly recommended.

The Build for Specific Platform command creates a compiled map file in the format required for a given
platform, which accounts for byte-order and character set differences on that platform.

To build a map for a specific platform

1. Select the map you want to build.

2. From the Map menu, choose Build for Specific Platform. The Select Platform dialog box appears.

3. For the Platform, select the platform on which you plan to run the map.

The File Name field on the Select Platform dialog box updates with the name of the compiled map that
will be ported to that platform.

4. Click OK.

After the map is built for the specific platform, perform a binary file transfer of the compiled map file to the
command server environment. For example, in a UNIX environment, you might use FTP to transfer the
ported map to your UNIX server.
You should always use the Build for Specific Platform command when your target platform has a different
byte order or character set than the one in your map development environment.

What is workspace?

What are the methods to override input/output settings? Using RUN function and IFD settings. Data
sources and targets, and other map settings, can be overridden from the Integration Flow Designer or
when a map is run using the command server or the Platform API.

What is WorkArea?

Versions change – Upgrading?


Whenever you add a subsystem that references an .msd file from a previous version, a message notifies
you that all map source files (.mms) will be updated unless the paths to the existing maps are changed by
using the Find/Replace operation.
Use the Find and Replace commands to change the paths that specify where executable maps are
located on target servers.

Note The text on the Find All button changes to Find when a value is entered in the Find What field.

The Find dialog appears with the name of the active system displayed in the title bar.
You can search for component names, card names, and path names.

How to specify database as parameter of RUN Map?


-ID1

What are the adapters you have come across?


Open to you (like FTP, Database, MQ, and File)

What are the settings you have to do for DATABASE Adapter?


1. From the input or output card(s) in the executable map, select Database as the value for either the
GET > Source or PUT > Target settings.

2. Select the .mdq file containing the database-specific source or target information.

3. Perform the following for an input and/or an output card:

§ In an input card, select a query from the .mdq file.

§ In an output card, specify a table name or stored procedure.

PFA the screen shot to know the basic setting details required for a database adapter.

Do you have any external commit for DATABASE adapters?


Yes, we can have an external commit.

81. During setting override operation what does “-ofb”?


If the map, burst, or card does not complete successfully, roll back any changes made to this data target.
If this option is not specified, the OnFailure setting compiled into the map is used.

82. Is it possible to create a backup with map settings?


Yes, it is possible to create a backup for both the input and output cards. You can define the BACKUP
settings for each input or output card as needed.
83. IF (PRESENT (A), A, B): what is the equivalent function? EITHER (A, B)

84. What is the EITHER function? The EITHER function returns the result of the first argument that does
not evaluate to NONE.

Syntax
EITHER (single-general-expression { , single-general-expression } )
Meaning EITHER (try this { , if none—try_this } )
Returns
This function returns a single-object. Returns try this, if try this does not evaluate to NONE. Returns
if_none_try_this, if try this evaluates to NONE. Returns the next if_none_try_this, if the first
if_none_try_this evaluates to NONE, and so on.

Uses
Use EITHER when you want a default when an expression evaluates to NONE and the expression may
cause common arguments to produce an unintended result.
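
A sketch (the field name is illustrative), equivalent to the IF (PRESENT (...)) form mentioned earlier:

Ex: EITHER (Discount Field:Row:Input, "0")

If the discount field evaluates to NONE, the literal "0" is used as the default.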

85. Can you use LOOKUP in LOOKUP like LOOKUP (LOOKUP ())?
Yes

86. In a functional map, if you are using 5 parameters and the first parameter is EXTRACT(), how will the
functional map behave if no value is returned by the EXTRACT? The functional map will not be called.

Can we refer to two different databases at a time?


Yes, we can refer to two different databases at a time by referring to two different .mdq files that have
different Oracle database settings.

Frequently used functions in Mercator

Integration Flow Designer and Servers

What is event server?


The Event Server automates the execution of systems of maps and can control multiple systems.

Different triggering events in msd?


Source and Event Triggered

What is command server?


The Command Server is used to develop, test, and execute maps in development environments. It can
also be used to execute commands in production environments, but only a single map at a time.

Working of event server?


The Event Server runs systems of maps that are created and generated using the Integration Flow
Designer (IFD). These systems of maps that are generated specifically to run in the Event Server are
called Event Server system files (.msl); sometimes referred to as Event Server control files. When the
Event Server starts running, it is initialized with .msl files in the deployment directory.

Stopping and starting of event server?


start_es – starts the corresponding Event Server.
stop_es – stops the Event Server that is currently in use.

What is the purpose of event server?


The Event Server automates the execution of systems of maps and can control multiple systems. On
Windows platforms, the Event Server runs as a multi-threaded service. On UNIX platforms, the Event
Server runs as a multi-threaded daemon.
What is Event Server Path?
Event Server Path is the path where Event server system files (.msl) are placed.

Is it necessary to stop an event server when new system/changes to a system is deployed?


Yes. Before you create or update an .msl file from the Integration Flow Designer (IFD), stop the Event
Server and Event Server Monitor. If the Event Server is running when you create or update an .msl file,
the change is not recognized and the file does not appear in the Event Server Monitor. You must restart
the Event Server in order for changes to be recognized.

Turning the trace file on or off: how does it affect performance? It is recommended to turn off the trace
file when executing on the server, since generating and storing the trace file takes time and space.

What is the use of IFD?


Use the IFD whenever you have multiple maps to manage within your enterprise. One click can build or
port as many maps as you have defined in any system.

Note You must use the IFD to generate systems if you plan to use the Event Server. The IFD is the client
definition facility for the Event Server.

You do not need the IFD if you are using a Command Server. However, you will find it to be useful as a
client facility for managing maps that will be run by a Command Server. The IFD generates process
control information for Command Servers in the form of command files. Generating these command files
manually is tedious and error-prone. Using the IFD eliminates possible manual errors.

What are the steps to create a *.msd file?


The main function of the Integration Flow Designer (IFD) is to create system definition files (.msd) that are
generated into Event Server system files (.msl).

To create a new system definition file

1. From the File menu, choose New.


A new system definition file icon named SystemFileX (where X is a sequential number) appears in the
Navigator and a new system window opens with the default name System1.

You are now ready to begin creating the system definition diagram for your new system.

To save a system definition file

1. Select File > Save.


2. Enter a file name for the new system definition file.
3. Click Save.

The new system definition file name is applied and the new name appears in the title bar of the active
system window. The title bar of the system window displays the system name followed by the system
definition file name in parentheses (*.msd). The settings for Server, Execution Mode, and Platform are set
to the defaults until you change them.

To create a system in an existing system definition file

1. In the Navigator, select the system definition file that is to contain the new system.

2. From the System menu, choose New.

The New System dialog appears with a default name of System followed by a sequential number.
3. Accept the default name or enter a unique name for the system, and click OK.

A system window appears with the new system name and the path name of the system definition file
displayed in the title bar. The new system name also appears in the List view of the Navigator.

An .msd is time-event based. For example, from 10:00 AM the map must be triggered every five
minutes. A huge file A is present; it starts getting validated at 10:00 AM and processing lasts more than 5
minutes. What will happen? Will the map stop processing the current file, or will more instances of the
map be triggered to take the new files, allowing file A to be processed in parallel? Either the next time
event (10:05) can be skipped, or multiple instances of the map can be triggered to validate the new files.
PFA the screen shot to accomplish the setting for this scenario.

99. Is it possible to set a time event and a source event simultaneously? Yes, it is possible. PFA the
screen shot to accomplish this situation.

100. How will you use the event server?


To use the Event Server, you must complete a series of steps. The list below provides a high-level
overview of the steps to follow after installing Ascential DataStage™ TX.

Basic Steps

These steps assume that you have already generated an Event Server system file (.msl). For help on
creating or generating a system file, refer to one of the following:

¨ Creating a System Definition File


¨ Generating System Files for the Event Server

Before you create or update an .msl file from the Integration Flow Designer (IFD), stop the Event Server
and Event Server Monitor. If the Event Server is running when you create or update an .msl file, the
change is not recognized and the file does not appear in the Event Server Monitor. You must restart the
Event Server in order for changes to be recognized.

Steps for Using the Event Server

¨ Configure the Event Server from the Event Server Administration interface.
¨ Start the Event Server service.
¨ Configure an Event Server connection in the Management Console and begin viewing statistical data
from the process running. (From there you can also control the compound system that is running).
¨ To view the watches run dynamically and to take snapshots, use the Event Server Monitor.
¨ To view snapshots captured by the Event Server Monitor, use the Snapshot Viewer.

Database Interface Designer

101. Specifying mdq’s in maps?


Dynamic and static .mdq usage:
Static: the .mdq is specified in the input path and database access happens only one time.
Dynamic: through the DBQUERY and DBLOOKUP functions.

102. What is the use of Database Interface Designer


The Database Interface Designer is used to:

¨ specify the databases to use for a source or target


¨ define query statements
¨ automatically generate type trees for queries or tables

103. What are the steps to create a *.mdq file?


A database/query file contains the definitions for one or more databases as well as queries, stored
procedures, and other specifications that may contribute to the execution of a map. A database/query file
is a file you create and save using the commands on the File menu in the Database Interface Designer.
The result is a file with an extension of .mdq. This file name (including its path) appears in the title bar of
the Database Interface Designer window when it is the selected file in the Navigator, indicating that it is
the active .mdq file.

After starting the Database Interface Designer, the Navigator lists one or more .mdq files, depending upon
whether you selected to create a new .mdq file or to open one or more existing files. When an .mdq file is
created, it appears in the Navigator next to the appropriate icon. The default name is
Database_QueryFile followed by an assigned, sequential number. To change this name, from the
File menu, select Save As to save the new .mdq file, specifying the file name and location as desired.

Note Saving a new .mdq file uses the same procedure as saving an existing one.

Type Designer

1) What is a Type Designer?


The Type Designer is used to define, modify, and view type trees.

2) What is syntax?
The syntax of data refers to its format including tags, delimiters, terminators, and other characters that
separate or identify sections of data.

3) What is structure?
The structure of data refers to its composition including repeating sub-structures and nested groupings.

4) What is semantics?
The semantics of data refer to the meaning of the data including rules for data values, relationships
among parts of a large data object, and error detection and recovery.

5) What is a type tree?


· A type tree describes the syntax, structure, and semantics of your data.
· A type tree (.mtt) defines the entire contents of at least one input that you intend to map or one output
you intend to map.
· A type tree is the mechanism for defining each element of your data. Similar to a data dictionary, a type
tree contains a collection of type definitions.

6) Give a small example.


A data file is a simple example. The file is made up of records and each record is made up of fields. In
this case, there are three kinds of data objects: a file, a record, and a field.

7) Mention the different type designer files.


Extension File type
.mtt Type tree file
.dbe Type tree analysis message file
.bak Backup type tree file

7) Mention the type designer icons.


Icon Description
Circle (blue) Item type
Circle (green) Sequence group type
Triangle Choice group type
rhombus Partitioned group
double plus Unordered group
circle (red) Category type

8) What are the different windows in a type tree?


· Item Window
· Group Window
· Category Window
· Properties Window

9) Difference between Group and Category window.


· The Group window is used to define the components of a group type. Group types represent objects
composed of other objects.
· The Category window is used to define the components of a category type. Category types are used
for inheritance purposes, to make components available to other types.
· The Group window has a Rule column for entering component rules.
· Components of a category do not have component rules.

10) What is the use of the Properties window?


The Properties window is used to define and view the properties of the currently selected type. Each type
has properties that define the characteristics of that data object.

11) Mention the steps for creating type trees.


The following list outlines the process for creating type trees:
· Identify the data objects in your data and define each piece of data
that you intend to map.
· Create types for each data object in your data.
· Define the properties of each type.
· Create component lists.
· Define component rules, if needed.
· Define item restrictions, if needed.
· Analyze and save the tree.

12) What is a component?


A component is an object that is part of a larger object.

13) What is a data object?


· A data object is a complete unit that exists in your input or is built on output.
· A data object may be simple (such as a date) or complex (such as a purchase order).
· A data object is some portion of data in a data stream that can be recognized as belonging to a specific
type.

14) What is a type?


· A type defines a set of data objects that have the same characteristics.
· For example, the type Date may be defined as representing data objects in the form MM-DD-YY.
· The type Customer Record may be defined as representing data objects, each of which consists of a
Company, Address, and Phone data object.

15) What are the different classes of types?


· Item
· Group
· Category

16) What is an item?


An item type represents a simple data object that does NOT consist of other objects.

17) What is a group?


A group type represents a complex data object that consists of other objects.

18) What is a category?


A category type is used for inheritance and for organizing other types in a type tree.

19) When can two type trees be different?


Two type trees are considered different if:
· Any of the types are different.
· Any of the types that exist in one type tree does not exist in the second type tree.
· When the order of the subtypes within a subtype is different.

20) When can two types be different?


A type is different if:
· Any of the properties are different
· Any of the components are different
· Any of the restrictions are different

21) When can two restrictions be different?


A restriction is different if:
· The restriction value is different
· The description is different

22) What are the common properties of Item types?
For Item types, properties define whether the item is text, a number, a date & time, or a syntax value.
Properties include such characteristics as size, pad characters, and justification.

23) What are the common properties of Group types?
For Group types, the properties relate to the format of the group. The format of a group may be
explicit or implicit. In addition, type properties include syntax objects that appear at the beginning or end
of the object, as well as release characters.

24) Mention some basic type properties.


1. Name -> name of the type.
2. Class -> class of the type whether it is category, group or item.
3. Description -> to record a brief description of the type.
4. Intent -> indicates whether the type is a general type or an XML type.
5. Partitioned -> yes/no
6. Order Subtypes -> choose the method in which the subtypes of this will be added or viewed in the type
tree. Some of its methods are Ascending (default), Descending, Add First, Add Last.
7. Initiator -> a syntax object that appears at the beginning of a data object. Its properties are
none (default), literal, variable.
8. Terminator -> a syntax object that appears at the end of a data object. Its properties are none (default),
literal, variable.
9. Release Characters -> a one-byte character in the data indicating that the character(s) following it
should be interpreted as data, not as a syntax object. The release character itself is not treated as data, but
the data that follows it is treated as actual data. Its properties are none (default), literal, variable.
10. Empty -> the Empty property provides alternate type syntax for groups or items when they have no
data content. When the Empty property is specified for a type and there is no data content, the Empty
syntax appears. For example, this can be used for XML data that contains either start and end tags or an
empty tag. You can use the Empty type property instead of syntax object items to potentially improve the
type tree run-time processing time (during data validation).

25) Mention Item class properties.


1. Item Subclass -> Number/Text/Date & Time/Syntax
2. Interpret as -> Character/Binary
3. Presentation -> Integer/Float/Packed/BCD
4. Length (if binary only) ->1,2,3,4(default)
5. Size -> min/max
6. Separators -> yes/no
7. Pad -> yes/no value =0(default for num) value=space (default for char)
8. Padded to -> min content/fixed size length=0 (default)
9. Justify -> left(default)/right
10. Apply pad - > Fixed Group(default)/Any context

26) What do you know about binary text items and number items?
· Binary text items have content size and pad properties.
· Binary data is required to be sized or of a fixed size.
· Binary number items interpret the data as a byte stream.

27) What do you know about character text items and number items?
· Character text items can have content size and pad properties.
· Character number items interpret the data as symbolic data.

28) What is the content size?


The content size of a text item specifies the bytes of data, excluding any initiator, terminator, release
characters, and pad characters. It is also independent of the character set.

29) What is padding, or the Pad property?


If the data value to be mapped to the target item is smaller than the minimum length of that item, pad
characters are used to pad the data to that minimum length. Input data may contain both content and pad
characters. Output data is built according to the pad definitions of the types.

30) What are the different Pad properties?


1. Pad > Value -> defines the one-byte pad character. The default value is 0.
2. Pad > Padded to -> defines whether the data item is padded to a fixed size or to the minimum content
size defined for that item. The padded-to length must be greater than or equal to the max size value.
3. Pad > Justify -> specifies whether the data is padded to the left or right.
4. Pad > Apply Pad -> specifies when to apply the pad character.

31) What do you know about syntax object properties?


Syntax objects are characters that precede, separate or follow a particular data object.

32) What are item restrictions and how are they grouped?
Restrictions of an item type are the valid or invalid values for that item. For example, the unit of measure
field in the data must be one of a set of values: CN, BX, PK, BR. These values should be defined as
“include” restrictions of the item Unit of Measure.
Item restrictions are grouped into three categories: Value, Character, and Range.
33) What are the rule properties for Value restrictions?
Include and Exclude.

34) What are the rule properties for Character restrictions?


Include First, Include After, Exclude, Reference String.

35) What are the rule properties for Range restrictions?


Include Minimum, Include Maximum, and Description; and Exclude Minimum, Exclude Maximum, and
Description.
36) What is a sequence group type?
A sequence group is a group in which all components are sequentially ordered.
Each component of a sequence group is validated sequentially. It is mainly used when you want to get
the output in the same order as the input.
For example, in any EDI input file, the elements of the control segments (ISA, GS, ST) are sequentially
ordered.

37) What is a choice group type?


Choice groups provide the ability to define a selection from a set of components like multiple-choice
questions on a test. A choice group is validated as only one of its components. Validation of choice group
is attempted in the order of components until a single component is validated. If the choice group has
initiator, the initiator is validated first. Choice groups have no partition or format properties. Components
of a choice group must be distinguishable from each other. The components of a choice group cannot
have a component range other than (1:1). Only one component of a choice group is built in the data.
For example, the type Record is a group with a subclass of Choice. The group type Record has three
components: Order, Invoice, and Sales. The data validation of Record will match only one of the
components Order, Invoice, or Sales.
38) What is an unordered group type?
An unordered group has one or more components that can appear in the data stream in any order.
Unordered groups have no Partitioned property. They have implicit format properties, with a syntax of
none or delimited. When a group is defined as unordered, any component can appear in the data stream.
A component can be an item or a group.
Unordered group components have a range property. For example, if the unordered group A has the
following component list: B (1:S) C D (S), then A must have one C, at least one B, and possibly some Ds.
They can appear in any order; for example, data for A could have the pattern CDDBDDD or
BBBDDCBD.

39) Difference between choice group and partitioned group.


A choice group can have both items and groups as components, whereas a partitioned sequence group
can have only group subtypes.
40) Difference between sequence and choice groups.

A sequence group is validated as all of its components, in order, whereas a choice group is validated as
exactly one of its components. In addition, choice groups have no Partition or Format properties.

41) Difference between explicit and implicit formats.


The explicit format relies on syntax to separate components. Each component can be identified by its
position or by a delimiter in the data. Delimiters appear for missing components.
The implicit format relies on the properties of the component types. The format is not fixed. If delimiters
separate components, they do not appear for missing components.
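The practical difference between the two formats shows up in how missing components are written out. The Python sketch below contrasts them; it is an illustration only, and the "|" delimiter and the component values are invented for the example.

```python
# Hedged sketch: explicit vs implicit delimited formats.
# Components are an ordered list of optional values (None = missing).

def explicit_delimited(components, delim="|"):
    # Explicit: the delimiter acts as a placeholder, so a missing
    # component still contributes an empty slot.
    return delim.join("" if c is None else c for c in components)

def implicit_delimited(components, delim="|"):
    # Implicit: delimiters separate only the components that are present;
    # nothing marks a missing component's position.
    return delim.join(c for c in components if c is not None)

data = ["A", None, "C", None, "E"]
print(explicit_delimited(data))  # A||C||E
print(implicit_delimited(data))  # A|C|E
```

In the explicit form, a reader can still identify each component by position; in the implicit form, components must be identifiable by their own properties.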
42) What is Fixed Syntax?

Fixed syntax is the property by which a group data object always has the same size. Each component of
a fixed group must be fixed. If you break down a fixed group, it ultimately consists of items that are fixed.
Each is padded to a fixed size or its minimum and maximum content size are equal. Do not specify the
size of a fixed group. The size is automatically calculated based on the size of the group’s components.

43) What is Explicit Delimited Syntax?


An explicit format group with a delimited syntax is one whose components are separated by a delimiter
and the delimiter appears as a placeholder even when a component has no content. The only time a
delimiter can be missing is if components following the delimiter are all optional and there is no data for
these optional components.

44) What is Floating Component?


The floating component represents an object that may appear after any component of the group. An
implicit group can have a floating component; an explicit group cannot. If the group is prefix or infix
delimited, the floating component appears before the delimiter. If the group is postfix delimited, the
floating component appears after the delimiter. A floating component can be an optional component that
may appear after any other component. However, it is not included in the component list because it does
not appear at a specific location. If a group has a floating component, a component must be
distinguishable from a floating component. For example, components and floating component could start
with different initiators.

45) What is Implicit Delimited Syntax?


If a delimiter separates the components of a group, but the delimiter does not appear when a component
is missing, that group has an implicit format with a delimited syntax.

46) What is a component range? Give some examples.


A component range defines the number of consecutive occurrences of a component. A range can be
specified for any component.
Syntax: component (min: max)
Examples: 1) Record (1:s)
2) Record (0:s) or Record(s)
3) Record (1:6)
4) Record (8)
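The range notations above can be summarized as: (1:s) means one or more, (s) is shorthand for (0:s) and means zero or more, (1:6) means one to six, and (8) means exactly eight. A small Python sketch of parsing and checking such a range (an illustration only, not ITX code):

```python
# Hedged sketch: parse a component range like "(1:s)", "(s)", "(1:6)" or "(8)"
# and check an occurrence count against it. 's' ("some") means no upper bound.

def parse_range(spec):
    """Return (min, max); max=None means unbounded."""
    spec = spec.strip("() ")
    if ":" in spec:
        lo, hi = spec.split(":")
        return int(lo), (None if hi.lower() == "s" else int(hi))
    if spec.lower() == "s":   # (s) is shorthand for (0:s)
        return 0, None
    n = int(spec)             # (8) means exactly 8 occurrences
    return n, n

def in_range(count, spec):
    lo, hi = parse_range(spec)
    return count >= lo and (hi is None or count <= hi)

print(parse_range("(1:s)"))  # (1, None)
print(in_range(4, "(1:6)"))  # True
print(in_range(7, "(8)"))    # False: exactly 8 required
```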

47) What is a component rule? Give some examples.


A component rule is an expression about one or more components. It indicates what must be true for that
component to be valid. For given data, it evaluates to either TRUE or FALSE. A component rule is similar
to a test. If the data does not pass the test, it is invalid.
Component rules are used for validating data.
Examples:
1) Quantity < 1000
2) WHEN (PRESENT (Address Field), PRESENT (Qualifier Field))
3) Amount = Quantity * Price

48) How can you enter an object name into a component rule?

You can do this by pressing ALT and dragging the object into the edit window.

49) What is the $ symbol in a component rule?

The $ symbol is a shorthand notation that represents the current object or component itself.

50) Mention the component attributes.

Component attributes are toggle commands. Three attributes can be assigned to a component in a
component list:
· Identifier
· Restart
· Sized

51) What is the Identifier attribute?

The identifier attribute can be used on a component of a group. The identifier indicates the components
that can be used to identify the type to which a data object belongs. All the components, from the first
up to and including the component with the identifier attribute, are used for type identification.

52) What is the Restart attribute?

To continue processing your input data when a data object of a component is invalid, assign the restart
attribute to that component.

53) What is the Sized attribute?

The sized attribute is used on a component whose value specifies the size (in bytes) of the component
immediately following it. The sized attribute can be used on more than one component of a group.

54) What is partitioning in Mercator and when is it required?

Partitioning is the method of subdividing objects into mutually exclusive subtypes. Partitioning is
required when components are randomly or partially ordered. Partitioning is also used to simplify the
rules needed in the map or to build additional logic into the definition of your data.

55) Explain the concept of partitioning with a good example.

Consider an Employee type tree with multiple departments: Development, Testing, Production and
Maintenance. Without partitioning, you would specify a condition for each department abbreviation in
the rule. This makes your mapping rules long, difficult to read and difficult to maintain.

Map rule without partitioning:
=IF (Department Field:.:Input = "DE" | Department Field:.:Input = "DB" |
     Department Field:.:Input = "DA" | Department Field:.:Input = "DS",
     F_MapDevelopment (Record:Input), NONE)

Map rule with partitioning: here we set the Department partition property to YES, and the rule becomes
more concise, self-documenting and easier to maintain.
=F_MapDevelopment (EXTRACT (Record:Input, PARTITION (Department Field:.:Input, Development)))

56) What are the benefits of partitioning?

· The rule is shorter than the rule without partitioning.
· It is easy to read the map rule and understand the mapping function being performed.
· The partitioning method is easier to maintain.
· Partitioning using a restriction list with the Ignore Case setting eliminates the need for the PROPER,
LOWERCASE and UPPERCASE functions.

57) What are the three methods to partition items in type trees?

· Initiators
· Restrictions
· Format

58) What are the three methods to partition groups in type trees?

· Initiators
· Identifiers
· Component rules

59) How can you validate the type tree definition or data type definition?

During map execution, the input data is compared to the data definition in the type tree. If the data does
not match the definition, it is invalid or in error. To validate a data object as belonging to a certain type,
the data must be matched to its type definition. For the data to be valid, the following must be true:
· The data must have the properties defined in the properties window.
· If the type is an item that has restrictions, the data object must match one of the restrictions.
· If the type is a group, the components of the data object must match those defined in the group
window, and each component rule must evaluate to TRUE at map execution time.

60) What is a UFO?

A UFO (Unidentified Foreign Object) is data in error with no valid data contained within it.

61) How can you handle errors in the data stream?

The restart attribute provides instructions for handling errors encountered in a data stream. If you are
mapping data of a component with a restart attribute, only the valid occurrences of that component are
mapped. For example, suppose your input is a file of records, the Record component of File has the
restart attribute, and some of the records are invalid. When you map the Record objects, only the valid
records are mapped. To map the invalid records, use the REJECT function.

62) How does the Restart attribute work?

The restart attribute has the following properties:
· During validation, the restart attribute tells the system where to start over when a UFO is encountered
in the data. All unrecognized data is considered to be an error of the type with the restart assigned.
· The restart attribute is used during validation to mark both UFOs and existing data objects in error,
which are ignored when mapping input to output.
· The restart attribute is used to identify valid data objects that contain objects in error.
· If an invalid data object is a component that does not have the restart attribute, that component is
marked in error.
· If components from the beginning of the data are marked in error because none of them has a restart,
the system stops after validation. It does not map the input data to the output.

63) What is the Type Tree Analyzer?

The Type Tree Analyzer analyzes type definitions and ensures internal consistency. The Analyzer checks
your data definitions for logical consistency. It does not compare your definitions to your actual data.
The resulting analyzer messages indicate whether your type tree definitions are acceptable, not whether
they match your data.

64) What is Logical Analysis?

Logical Analysis addresses the integrity of the relationships that you define. Logical Analysis detects
the following:
· Undefined components
· Components that are not distinguishable from one another
· Item restrictions that do not match the properties of that item
· Circular type definitions
· Delimiter relationships that are inconsistent with each other or with the components
· Undefined inherited relationships
· Logical errors contained in component rules

65) What is Structural Analysis?

Structural Analysis addresses the integrity of the underlying database. It may be able to detect, and
possibly correct, defects caused by system environment failures.
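The payoff of the partitioning example above can be sketched in Python: instead of one long IF chain testing every department code, the code-to-partition lookup lives in the data definition and the rule simply dispatches. This is a conceptual illustration only; the department codes come from the example above, and the handler names are invented.

```python
# Hedged sketch: partitioning replaces a long conditional with a lookup.
# The table plays the role of the partitioned Department definition.
DEPARTMENT_PARTITIONS = {
    "DE": "Development", "DB": "Development",
    "DA": "Development", "DS": "Development",
    "TE": "Testing",
}

def map_record(dept_code):
    """Dispatch on the partition, not on each individual code."""
    partition = DEPARTMENT_PARTITIONS.get(dept_code)
    if partition == "Development":
        return "F_MapDevelopment"   # stand-in for calling the functional map
    return "NONE"

print(map_record("DB"))  # F_MapDevelopment
print(map_record("ZZ"))  # NONE
```

Adding a new Development code now means adding one table entry, rather than lengthening every rule that tests the codes, which is the maintainability benefit the text describes.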
