Workday® - PRISM
1 | Prism Analytics
You need to create a Tenant Management case with Workday and create a Tenant - Setup Feature request to provision Prism
Analytics hardware for your tenant.
Context
Set up Prism Analytics to enable data administrators and data analysts to create tables and datasets for blending Workday and
non-Workday data that data analysts can analyze inside Workday.
Steps
Select the Enabled check box for the Prism Analytics functional area.
Workday suggests that you use the security groups you created. Example: You can define these security groups (administered by the Security Configurator group):
https://doc.workday.com/internal/api/webapp/print/3b3760bb-3c9f-4606-b4ad-00fc9795a442 1/200
3/11/23, 9:33 PM Workday® Administrator Guide
Security domain | Security groups | Access
Prism Datasets: Create | Prism Data Administrator (Prism Access), Prism Data Writer (Prism Access) | View and Modify
Prism Datasets: Manage | Prism Data Administrator (Prism Access) | View and Modify
Prism Datasets: Owner Manage | Prism Data Administrator (Prism Access), Security Administrator (Prism Access) | View and Modify
Prism Datasets: Publish | Prism Data Administrator (Prism Access), Security Configurator (Prism Access) | View and Modify
Prism: Manage Data Source | Security Configurator (Prism Access) | View and Modify
Prism: Manage Relax Sharing | Prism Data Administrator (Prism Access) | View and Modify
Prism: Tables Create | Prism Data Administrator (Prism Access), Prism Data Writer (Prism Access) | View and Modify
Prism: Tables Manage | Prism Data Administrator (Prism Access) | View and Modify
Prism: Manage Connection | Prism Data Administrator, Prism Data Writer | View and Modify
Prism: Manage File Containers | Employee as Self, Contingent Employee as Self (optional) | View and Modify
Related Information
Concepts
Concept: Security in Prism Analytics
Tasks
Create Prism Access Security Groups
Reference
Community article: Setup Feature Tenant Request
2021R1 What's New Post: Dataset Viewer
2021R1 What's New Post: Prism Analytics Security Domains
Context
Set up your tenant so you can install and configure analytic data sources, and make them available for data analysts to use.
Steps
Related Information
Concepts
Concept: Workday-Delivered Analytic Data Sources
Tasks
Steps: Set Up Tenant for Prism Analytics
Steps: Enable Functional Areas and Security Policies
What are the steps involved in going from raw data (both internal and external to Workday) to visualizations and reports in
Workday? What skills do you need to perform each step? This section explains each phase of the data workflow from transforming
data to analyzing it.
You bring in data by creating a table. A table is a Prism Analytics object that stores (materializes) data and represents it in a tabular
format. A table has a schema and contains data that's valid against the schema.
After you create a table, you create a data change task to load or change data in the table.
You can also create a base dataset to bring in data. However, Workday recommends creating tables only.
Example: You create a table to contain external operational data, and then create a data change task that loads that data from a
delimited file on an external server into the table. You create a schedule for the data change task to run on a recurring basis.
Example: You create a table and data change task using data in a Workday custom report.
If you're familiar with ETL workflows (extract, transform, and load), tables and data change tasks together encompass the extract
logic.
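As a rough mental model (not Workday code), the extract step can be sketched in a few lines of Python. The schema, field names, and date format below are invented for illustration; a single invalid field invalidates the whole row, matching the validation behavior described in the data validation sections of this guide:

```python
from datetime import datetime

# Hypothetical table schema; field names and types are illustrative only.
SCHEMA = {
    "store_id": int,
    "sale_date": "date",   # expects YYYY-MM-DD, standing in for a date format parameter
    "revenue": float,
}

def validate_row(row, schema=SCHEMA):
    """Return True only if every field value is valid against the schema."""
    for field, expected in schema.items():
        value = row.get(field)
        try:
            if expected == "date":
                datetime.strptime(value, "%Y-%m-%d")
            else:
                expected(value)          # e.g. int("101"), float("1250.50")
        except (TypeError, ValueError):
            return False                 # one bad field rejects the entire row
    return True

rows = [
    {"store_id": "101", "sale_date": "2023-01-15", "revenue": "1250.50"},
    {"store_id": "102", "sale_date": "not-a-date", "revenue": "900.00"},
]
loaded = [r for r in rows if validate_row(r)]      # rows accepted into the table
errors = [r for r in rows if not validate_row(r)]  # rows routed to an error file
```

The split into `loaded` and `errors` mirrors how a data change task either loads a row or sends it to the downloadable error file.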
Edit a Table
Derived datasets contain information on how to process, blend, and transform the data you import into them. Transforming data is
an iterative process and typically involves creating multiple derived datasets based on other tables and derived datasets. You can:
Example: You can create a derived dataset that aggregates data into groups using a Group By stage, and then create another
derived dataset based on this dataset that joins the aggregated data with another dataset using a Join stage.
Example: You can create a Prism calculated field to calculate the point of sales revenue per store.
From the perspective of an ETL workflow, the derived dataset encompasses the transform logic.
Before you make any data in the Data Catalog available for analysis, you need to apply security to the data using the existing
Workday security domains. Applying security to the data you plan to make available for analysis enables you to take advantage of
Workday's strong, configurable security model for external data as well as Workday data.
You configure the data security by editing the data source security on the dataset or table, but Workday applies the security to the
data in the form of a Prism data source (see next phase).
The way you create a Prism data source depends on the Data Catalog object:
Table. Edit the table schema and select the Enable for Analysis option.
Dataset. Publish the dataset. On the View Dataset Details report, select Quick Actions > Publish.
Workday applies the security domains configured in the data source security for the table or dataset, or it applies the Prism: Default
to Dataset Access security domain if no data source security was configured. The Prism: Default to Dataset Access domain
provides contextual access to a Prism data source based on your access to the underlying table or dataset.
You can use Prism data sources in visualizations and reports like any Workday-delivered data source.
From the perspective of an ETL workflow, enabling a table for analysis and publishing a dataset are the load part of the process.
Where can I read more details? Concept: Making Prism Data Available for Analysis
Edit a Table
Related Information
Concepts
Concept: Security in Prism Analytics
Tasks
Steps: Create a Table Manually
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset with External Data (Upload a File)
Steps: Create a Dataset Using Workday Data
Steps: Create a Derived Dataset
Edit Prism Data Source Security
Publish a Dataset as a Prism Data Source Manually
Create Dataset Publish Schedules
Reference
The Next Level: Prism Analytics Community Guide
The Next Level: Move up the Workday Maturity Curve: Considerations in Defining an Analytics Strategy
The Next Level: Create a Vaccine Management Solution Using Prism and Discovery Boards
The Next Level: Estimating Prism Projects - Creating an Estimator Framework
The Next Level: Scoping a Prism Project
To create a table from a Workday report, the report must be configured:
As an advanced report. Tables and data change tasks don't support importing other report types, such as matrix or composite.
As a web service.
For Prism Analytics. Select the Enable for Prism check box in the Advanced tab of the custom report.
Workday doesn't support selecting reports when the report includes:
Fields from a related business object that have a many-to-1 relationship with the primary business object.
Prompts, except for optional prompts and required prompts with a default value set.
Note: If your custom report includes a Currency field, you must select the Show Currency Column check box in the Field Options
column for the Currency field, located in the Columns tab of the report.
To optimize performance when running a data change task, consider these options:
Datasets
To create a dataset from a Workday report, the report must be configured:
As an advanced report. Tables and data change tasks don't support importing other report types, such as matrix or
composite.
As a web service.
For Prism Analytics. Select the Enable for Prism check box in the Advanced tab of the custom report.
Workday doesn't support selecting reports when the report includes:
Fields from a related business object that have a many-to-1 relationship with the primary business object.
Note: If your custom report includes a Currency field, you must select the Show Currency Column check box in the Field Options
column for the Currency field, located in the Columns tab of the report.
To optimize performance, consider these options:
Related Information
Tasks
Steps: Create Table from a Workday Report
Steps: Create a Dataset Using Workday Data
Prism data sources are different from Workday-delivered data sources in several ways. Prism data sources:
Prism data sources also have their own security. Security group access controls who can see a Prism data source.
Advanced. Some features aren't supported, such as Subfilters. Workday only displays features that it supports. Also, only
add a field from a related business object when necessary. Fields from related business objects might impact report
performance.
Composite. The Prism data source must include at least 1 Instance field.
Matrix. Most features are supported. Lookup Prior Value isn't supported.
Simple. All features are supported.
Transposed. All features are supported.
Arithmetic Calculation
Build Date
Concatenate Text
Convert Text To Number
Date Constant
Date Difference
Evaluate Expression
Format Date
Format Number
Format Text
Increment or Decrement Date
Lookup Date Rollup
Numeric Constant
Substring Text
Text Constant
Text Length
True/False Condition
Workday only makes available calculated fields that are supported for Prism data sources.
Calculated fields only work on supported field types. Example: date-related functions work only on fields of type Date.
You can't use Instance or Multi-Instance fields in some calculated fields, such as Format Text and Substring Text. This is
because Instance and Multi-Instance fields in Prism data sources only include the unique identifier information (also known
as a WID), not the display name.
Related Information
Concepts
Concept: Prism Analytics Data Management Workflow
Concept: Deleting Prism Data
Tasks
Unpublish a Dataset
Delete Rows from a Prism Data Source
Reference
The Next Level: Prism Analytics Community Guide
Create a table or dataset in the UI by uploading a file. Workday creates and uses a bucket.
Create a data change task in the UI by uploading a file. Workday creates and uses a file container.
Run a data change task in the UI by uploading a file. Workday creates and uses a file container.
Upload a file to a file container using the REST API.
Create a bucket using the REST API.
Upload a file to a bucket using the REST API.
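The bucket-based REST flow might look roughly like the Python sketch below. The host, endpoint paths, and payloads here are placeholders, not the real Workday Prism Analytics API; consult the API reference in your tenant for the actual URLs and request bodies:

```python
import json
import urllib.request

BASE = "https://tenant.example.com/prism/v1"   # hypothetical host and path
TOKEN = "<access token>"                       # placeholder bearer token

def _post(path, payload=None, data=None, content_type="application/json"):
    """Minimal helper: POST JSON or raw bytes with a bearer token."""
    body = json.dumps(payload).encode() if payload is not None else data
    req = urllib.request.Request(
        BASE + path, data=body, method="POST",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": content_type})
    return urllib.request.urlopen(req)

def upload_via_bucket(table_id, file_path):
    # 1. Create a bucket targeting the table (hypothetical endpoint).
    bucket = json.load(_post("/buckets", {"targetDataset": table_id}))
    # 2. Upload the file into the bucket.
    with open(file_path, "rb") as f:
        _post(f"/buckets/{bucket['id']}/files", data=f.read(),
              content_type="application/octet-stream")
    # 3. Complete the bucket so Workday validates and loads the data.
    return _post(f"/buckets/{bucket['id']}/complete", {})
```

The three calls correspond to the last three bullets above: create a bucket, upload a file to it, then signal completion so the load counts against the daily activity limits.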
Limit | Value
Maximum number of buckets that can be created in a 24-hour rolling period | 1,000
Maximum number of data change activities and bucket completions that can be run in a 24-hour rolling period | 1,000
Maximum number of files in all file containers and buckets in a 24-hour rolling period | 50,000
Maximum size of all files in all file containers and buckets in a 24-hour rolling period | 125 GB compressed
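When automating uploads, it can help to sanity-check a planned batch against these limits before submitting it. A minimal helper sketch (the function and its parameters are illustrative, not part of any Workday tooling):

```python
# Daily Prism upload limits from the table above (24-hour rolling period).
MAX_BUCKETS_PER_DAY = 1_000
MAX_ACTIVITIES_PER_DAY = 1_000
MAX_FILES_PER_DAY = 50_000
MAX_BYTES_PER_DAY = 125 * 1024**3  # 125 GB compressed

def fits_daily_limits(n_buckets, n_activities, n_files, total_compressed_bytes):
    """Return True if a planned batch stays within the rolling daily limits."""
    return (n_buckets <= MAX_BUCKETS_PER_DAY
            and n_activities <= MAX_ACTIVITIES_PER_DAY
            and n_files <= MAX_FILES_PER_DAY
            and total_compressed_bytes <= MAX_BYTES_PER_DAY)
```

Because the limits apply to a rolling 24-hour window, a real integration would also need to account for activity that already ran earlier in the window.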
Related Information
Tasks
Steps: Create a Table by File Upload
Create a Data Change Task
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset with External Data (Upload a File)
Upload a New File to a Dataset
A table is a Workday Prism Analytics object that stores (materializes) data and represents it in a tabular format. A table has a user-
defined schema and only contains data that's valid against the schema. The data in tables is backed by a distributed columnar data
store.
You create tables to bring in data from multiple sources and store it in a central location, the Data Catalog (similar to a data
warehouse). You can then join, transform, blend, and enrich table data using derived datasets based on the table. Use derived
datasets to prepare data for analysis.
Schema

Tables: You define the table schema before you load data into the table. If you're familiar with databases, this is commonly referred to as schema-on-write. You can still change the table schema, but there are requirements and limitations. Example: You can add or remove fields at any time, but if you want to change the field type, the table must be empty (zero rows). Schema changes are destructive. Example: If you remove a field, you lose all data contained in the field.

Base datasets: Base datasets control underlying data stored in files. Dataset schemas describe how to read (and later transform) the data stored in those files. If you're familiar with databases, this is commonly referred to as schema-on-read. Although you can change the dataset schema at any time, you must make sure that the schema matches the data in the underlying files so that Prism recognizes the data correctly. If the schema doesn't match the file, then Prism sets the value as null.

Source type

Tables: Tables can accept data from any type of source at any time, such as file upload, a dataset, or the REST API. Create data change tasks to load data into and change data in a table. You can use a different source type for each data change task that works on the same target table.

Base datasets: A base dataset only accepts data from the same source type that you used when you created the dataset. That is, if you create a dataset using SFTP, it only accepts data from an SFTP server.

Data validation

Tables: When you load data into a table, Workday validates the data against the defined schema. If the value for a field doesn't match the field type or other field parameters (such as date format), then Workday marks the entire row as invalid and doesn't include the row in the table. Instead, the row is sent to an error file that you can download.

Base datasets: When you publish a dataset, Workday reads the data stored on disk and validates it against the current schema of the base dataset. If the value for a field doesn't match the field type or other field parameters (such as date format), then Workday marks that field value as NULL and includes the row in the published Prism data source.

NULL handling

Tables: Every field allows NULL values unless you configured it as required. Workday distinguishes between NULL values and empty string values in Text fields when reading a delimited source file to load data into a table.

Base datasets: Every field allows NULL values. Workday doesn't distinguish between NULL values and empty string values in delimited files.

Name restrictions

Tables: Tables have 2 names. Name: a display name that displays in the Data Catalog; you can change it at any time. API Name: a unique name used to reference the table in the API; you can't change the table API name after you create the table. Table fields also have a display name and an API Name. You can change the display name at any time. However, you can't change the API name after you create the field and save the table. Display, API, and field API names must be unique and conform to the name validation rules. See Reference: Naming Guidelines for more information.

Base datasets: Datasets have 2 names. Name: the name that displays in the Data Catalog; you can change it, but you must update any reports and discovery boards that use the original names. API Name: a unique name used to reference the dataset in the API; you can't change the dataset API name after you create the dataset. Dataset fields only allow 1 name and you can change it at any time. But you must fix any downstream errors that might result from changing the field name. Display and API names must be unique and conform to the name validation rules.

Deleting data

Tables: You can remove all rows (truncate) or some rows (delete) from a table.

Base datasets: You can remove all rows (truncate) from a base dataset. You can't delete a subset of rows.

Make Prism data source

Tables: Use the Enable for Analysis option when you create or edit the table schema to create a Prism data source.

Base datasets: Publish the dataset to create a Prism data source.

Row count

Tables: Workday knows exactly how many rows of data exist in a table, and it displays the number of rows in the Data Catalog and on the View Table Details report.

Base datasets: Workday doesn't know how many rows of data exist in a base dataset.
Related Information
Concepts
Concept: Datasets
Reference
2020R1 What's New Post: Prism Analytics Tables
Reference: Naming Guidelines
When you load data into a table, Workday validates the data against the defined schema.
If the value for a field doesn’t match the field type or other field attributes (such as date format), then Workday marks the entire row
as invalid and doesn’t include the row in the table. Instead, Workday sends the row to an error file that you can download.
Use the error file to get a list of all rows that failed to load into the table. You can fix the errors in the data, remove the extra fields
that Workday adds, and load the fixed data into the table.
Is a CSV file.
Includes all fields defined in the table schema plus fields for troubleshooting:
Error Code
Error Message
Includes all failed rows up to a maximum of 10,000 rows. If there are more than 10,000 error rows, then Workday rejects the
load with a status of Failed.
To download the error file, access the Activities tab on the View Table Details report. Click the download icon for an activity that
included some errors.
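Because the error file is a CSV that appends the troubleshooting columns to your schema fields, stripping those extra fields before reloading is straightforward. A small Python sketch (the field names in any real file come from your table schema; nothing here is Workday code):

```python
import csv

def strip_error_fields(error_file, fixed_file,
                       extra=("Error Code", "Error Message")):
    """Drop the troubleshooting columns Workday adds to the error file,
    leaving only the schema fields so the rows can be fixed and reloaded."""
    with open(error_file, newline="") as src, \
         open(fixed_file, "w", newline="") as dst:
        reader = csv.DictReader(src)
        # Keep every column except the appended troubleshooting fields.
        keep = [f for f in reader.fieldnames if f not in extra]
        writer = csv.DictWriter(dst, fieldnames=keep)
        writer.writeheader()
        for row in reader:
            writer.writerow({f: row[f] for f in keep})
```

After correcting the invalid values in the output file, you can load it back into the table with a data change task.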
Related Information
Reference
Reference: Table Error File Error Codes
A dataset is a Prism Analytics object that controls some underlying data and describes some processing logic to manipulate the
data. A dataset is a description of the data, otherwise known as metadata. It contains all of the data about the data, plus a subset
of example rows to help you understand the data values. You create datasets to prepare data for analysis.
A dataset can describe either Workday or non-Workday (external) data. You might want to create a dataset using external data to
blend, transform, and enrich it with Workday data. This enables you to analyze your Workday and non-Workday data together
without having to export it into a separate electronic data warehouse and business intelligence (BI) application.
A dataset can also describe data that's output from another table or dataset by deriving the dataset from an existing table or
dataset.
Base dataset. A base dataset is a dataset that is based on 1 or more source files. The source files can come from external
sources (non-Workday data) or from the output of a Workday custom report. You create a base dataset when you create a
new dataset using these options:
from File. This option creates a dataset with external data that you upload in the browser.
from SFTP. This option creates a dataset with external data that Workday retrieves from an SFTP server using an
integration.
from Custom Report. This option creates a dataset with Workday data.
Derived dataset. A derived dataset is a dataset that is based on 1 or more existing tables or datasets. The source data of a
derived dataset comes from the output of existing tables and datasets. You use derived datasets to blend data together
from different sources, such as Workday data and non-Workday data. Some stage types, such as Join and Union, are only
available to derived datasets. You create a derived dataset when you create a new dataset using the Derived Dataset
option.
The Edit Dataset Transformations task is where you make changes to the dataset definition to manipulate data. This task acts like
a single page application, meaning you can interact with elements on the task dynamically as if it were its own application. You can
also make all the changes you want, such as adding Prism calculated fields and stages. When you're done, click Save to save all
changes to the dataset.
If you have permission to edit a dataset, you can access the Edit Dataset Transformations task using these methods:
Right-click the dataset name on the Data Catalog report and select Edit Transformations.
Select Edit Transformations from the Quick Actions on the View Dataset Details report.
Access the Edit Dataset task and select the dataset name that you want to edit.
When creating a dataset for the first time, the workflow leads you to the Edit Dataset Transformations task.
When you view the Edit Dataset Transformations task, you see these components:
1. Pipeline list. (Derived datasets only.) Click Change Pipeline to view a collapsible panel that lists all tables and datasets that
you've imported into the derived dataset. Importing a table or dataset creates a new pipeline. You can also use this panel to
add a new pipeline by importing an additional table or dataset. When you select an item in the pipeline list, the pipeline
details panel displays the details for that pipeline.
2. Pipeline details panel. This panel displays every stage in the dataset pipeline, starting with the first stage that created the pipeline. For base datasets, that first stage can be an Import or Parse stage, depending on where the source data comes from. For derived datasets, it's an Import stage. You can:
Add, edit, and delete stages.
Manage dataset fields.
View stage descriptions.
View the number and names of Prism calculated fields in a stage.
Collapse this panel to increase the available space in the example data table.
3. Example data table. The example data table takes up most of the space on the Edit Dataset Transformations task. It
displays the current view of the data (records and fields) for the output of the currently selected pipeline stage.
4. Inspector panel. This panel displays when you select a field in the example data table. You can hide this panel to increase
the available space in the example data table. This panel has these tabs:
Field Info. This tab displays detailed information about the selected field, including statistics on the values in the
field. All statistics are based on the data currently shown in the example data table. Therefore, to get more precise
numbers, increase the number of example rows.
Functions Library. This tab displays the functions that you can use in a Prism calculated field expression, including
description, syntax, and an example. You can search for a specific function. You can click the + icon next to the
function name to insert the function at the current location in the Prism calculated field expression.
5. Prism calculated field expression bar. Click Add field to add a new Prism calculated field, and then enter the field
expression. You can:
Use the expression bar later to edit the field expression.
Expand the expression builder to create and view multiline expressions.
Edit the field name in the inspector panel.
6. Stage statistics and search bar. For a selected stage, you can see the number of:
Fields
Prism calculated fields
Field-related errors
You can see the ID for stages. The stage ID is a unique ID that Workday assigns to the stage based on the order you added
the stage to the dataset. For each stage you add, the stage ID increases.
If you use Microsoft Excel, there are potential limits on the number of characters your cells can contain when you download
all values.
7. Download example data.
8. Search for a field and navigate to it directly.
9. Example data controls. Use this menu to change the number of example rows to display in the dataset by selecting a new
value.
10. Table and list view. You can view the example data table as a table or a list. The field list view navigator enables you to see
distinct, null, median, and top values at a glance. When you edit a new dataset, the default view is table view. Each time you
change the view, the last view you select becomes the default view.
11. Edit dataset details. Use the configuration icon to open a pop-up where you can:
Change the dataset display name.
Create and edit tags.
Edit the dataset description.
View the dataset API name.
12. Dataset actions menu. Use this menu to quickly access the View Dataset Details and View Dataset Lineage reports.
Related Information
Concepts
Concept: Dataset Stages
Concept: Dataset Pipelines
Concept: Prism Expression Language
Tasks
Change the Dataset Example Rows
Reference
W33 What's New Post: Field List Navigator
2021R1 What's New Post: Dataset Viewer
The Data Catalog report is your starting point for using Workday Prism Analytics. The report displays the data available to you. If
you have the appropriate permissions, you can:
Bring in and store data from multiple sources using either tables or base datasets.
Create derived datasets to transform the data.
Create data change tasks to change data in a table using data from a source.
When you view the Data Catalog report, you see these components:
You can also view tags assigned to a dataset or table from the inspector panel. Clicking a tag in the panel displays the
datasets or tables filtered by the tag.
4. Search for objects to filter the list of objects in the Data Catalog.
5. The number of objects displayed.
6. Tag filter. Click the tag icon to filter your datasets and tables by tags.
You can add or edit tags to organize your data if you have editor permission when you:
Create a dataset by file upload.
Create a derived dataset.
Create a table.
View a dataset or table.
Edit a table.
Note: Tags you create are visible to all.
If you remove a tag from a dataset or table, the tag still displays in the tag menu if another dataset or table uses the tag.
If you change browsers or laptops, the tag filters you select in the Data Catalog report won’t persist.
Workday doesn’t store tag names but rather a randomly generated identifier for each tag. This is consistent with how
Workday stores other filters in Data Catalog.
Workday provides the Prism Datasets and Tables data source to help you create reports and discovery boards about Prism
Analytics tables and datasets. Although this is an indexed data source, none of the fields in it are indexed. As a result, it behaves
like a standard data source.
Schemas are the structures behind how your data is organized. A dataset might consist of multiple schemas that you define when
creating the dataset. Each of these dataset components has its own schema:
The files containing external data that you upload into a dataset.
The custom report you create to bring Workday data into a dataset.
The input to a pipeline stage. For the first stage in a base dataset, this schema is determined by the dataset source.
The output from a pipeline stage. For the last stage in the Primary Pipeline, this schema determines the output schema of
the entire dataset.
Schema changes can happen anywhere in a dataset. Schemas change when you add new fields or remove existing fields, such as
when you:
Where changes occur in a dataset can affect other dataset components. Changes to dataset schemas can break components that
were based on the original schema. Where changes occur can also affect how you manage changes.
Note: When you change the schema and Workday imports the new source file into the dataset, you must open the dataset and save
the changes. If you don’t save the dataset, Workday continues to use the old schema definition. The next time you publish the
dataset, you might get inconsistent data in the Prism data source.
When you import data that changes the schema of your dataset from an SFTP server or custom report, you must manually edit the
dataset to incorporate the changes. If new fields are added to the dataset, Workday hides them. Expose the new fields and save the
dataset to include the fields in the dataset schema.
Workday recommends that you ensure that the dataset is up to date and includes the fields you want. In most cases, you use the
Manage Fields stage to manage dataset schema changes.
Manage Fields
The Manage Fields stage uses the output of the previous stage as a baseline from which to monitor changes. When this baseline
changes, Workday warns you in the pipeline details panel. If the baseline changes, in the Manage Fields stage, Workday:
Add a Manage Fields stage at the beginning of the Primary Pipeline of a derived dataset when you want to monitor the
schema of the table or dataset from which it’s derived.
Add a Manage Fields stage at the end of the Primary Pipeline of a dataset that you intend to publish. This enables you to
detect any schema changes that might break reports that use the Prism data source of this published dataset.
Add a Manage Fields stage at the end of a pipeline when you need to hide fields or expose new fields.
Add a Manage Fields stage in a base dataset to ensure that no future integration can unintentionally remove an existing
field.
Include no more than 2 Manage Fields stages in a single pipeline.
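The baseline monitoring that a Manage Fields stage performs amounts to a set comparison between the remembered field list and the current upstream output. A simplified sketch of that idea (not Workday's implementation; the field names are invented):

```python
def schema_diff(baseline, current):
    """Compare a baseline field list against the current upstream output,
    the way a Manage Fields stage monitors its input for changes."""
    baseline, current = set(baseline), set(current)
    return {
        "added": sorted(current - baseline),    # new fields to expose or hide
        "removed": sorted(baseline - current),  # fields that would break downstream
    }

# Example: upstream dropped "cost_center" and introduced "location".
diff = schema_diff(["worker", "cost_center"], ["worker", "location"])
```

A non-empty "removed" list is the case the stage warns about, since downstream stages and published Prism data sources may depend on those fields.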
In some cases, Workday handles schema changes without using a Manage Fields stage. Example: When uploading a new version
of a source file into a dataset, Workday handles schema changes based on the source file header row:
If the source file includes a header row, Workday uses the field names in the header row to determine which fields are in the new source file. You can only add new fields at the end of the source file. If you do so, Workday updates the schema of the dataset into which you're importing the new source file.
Related Information
Tasks
Manage Dataset Fields
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset Using Workday Data
Reference
The Next Level: Prism Analytics Best Practices
Datasets contain 1 or more pipelines. A pipeline is a container of stages that models how data should be transformed. It consists of an ordered list of 1 or more stages, each of which defines how to modify the data at that point in the pipeline.
Base datasets contain 1 pipeline, and derived datasets can contain 1 or more pipelines. Every dataset has a Primary Pipeline.
The first stage in a pipeline brings in data from the dataset source. Each stage after the first takes the output of the previous stage as its input. If you're familiar with ETL workflows (extract, transform, and load), each stage is 1 step in a development pipeline.
The last stage of any pipeline is the output for that pipeline. The output of the Primary Pipeline is the output of the entire dataset.
Therefore, when you publish a dataset, the output of the Primary Pipeline will be materialized as the data in the Prism data source.
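Conceptually, a pipeline is just function composition over an ordered stage list: each stage consumes the previous stage's output, and the last stage's output is the pipeline's output. A toy Python sketch with invented stage logic (not how Prism executes pipelines internally):

```python
def run_pipeline(stages, source):
    """Feed the source through each stage in order; each stage takes the
    previous stage's output. The last stage's output is the pipeline's output."""
    data = source
    for stage in stages:
        data = stage(data)
    return data

rows = [{"region": "EMEA", "sales": 10},
        {"region": "EMEA", "sales": 5},
        {"region": "APAC", "sales": 7}]

# Invented stand-ins for a Filter stage and a Group By stage.
filter_stage = lambda rs: [r for r in rs if r["sales"] > 5]
group_stage = lambda rs: {r["region"] for r in rs}

result = run_pipeline([filter_stage, group_stage], rows)
```

Publishing a dataset materializes exactly this final output of the Primary Pipeline as the Prism data source.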
Related Information
Tasks
Add a Stage to a Dataset
A stage is a pipeline object that takes in data, transforms it in some way, and outputs modified data. A stage performs a single
computational function, such as joining data with a Join stage or parsing data from the dataset source with a Parse stage.
Import
Parse
Manage Fields
Filter
Group by
Join
Union
Unpivot
Adding a stage can change the number of records and fields in the dataset.
Deleting a Union or Join stage disconnects the stage's pipelines but keeps them. If you don't want to keep the disconnected
pipelines, delete them or use them in another Join or Union stage in the Primary Pipeline.
Related Information
Tasks
Add a Stage to a Dataset
Reference
Reference: Dataset Stages
The Unpivot stage enables you to arrange the data in derived datasets in a way that is more meaningful to you. You can convert
fields (columns) to rows. The unpivot process consolidates data from 2 or more similar fields into a pair of new fields:
A field created from the original field names (referred to as Input Fields)
A field created from the original field values (referred to as Output Values)
Workday populates the new fields with the values you specified for the unpivot.
Workday repeats row values in the input fields you didn’t include in the unpivot.
You can:
Workday recommends that you create multiple pairs of new fields in a single Unpivot stage rather than 1 pair of new fields in many
Unpivot stages.
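The unpivot operation can be sketched in plain Python. This is a minimal illustration of the concept, not Workday code; the row data and field names are hypothetical, and the pair of new fields uses the Input Fields / Output Values naming described above.

```python
def unpivot(rows, unpivot_fields,
            name_field="Input Fields", value_field="Output Values"):
    """Consolidate similar columns into a pair of new fields: one holding
    the original field names, one holding the original field values.
    Values in fields not included in the unpivot are repeated per row."""
    out = []
    for row in rows:
        passthrough = {k: v for k, v in row.items() if k not in unpivot_fields}
        for f in unpivot_fields:
            new_row = dict(passthrough)       # repeated for each output row
            new_row[name_field] = f           # from the original field names
            new_row[value_field] = row[f]     # from the original field values
            out.append(new_row)
    return out

rows = [{"employee": "Ana", "2023_vested": 100, "2024_vested": 150}]
for r in unpivot(rows, ["2023_vested", "2024_vested"]):
    print(r)
```

Each input row produces one output row per unpivoted field, which is why converting columns to rows increases the record count.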
Related Information
Reference
Reference: Unpivot Stages
2020R2 What's New Post: Unpivot Stage
Examples
Example: Unpivot Stock Vesting Data in a Dataset
You can see where a dataset field originates in pipelines for derived datasets. When tracing a field’s origin, Workday searches for
the location where the field is first introduced. Workday displays the most upstream field that you have permission on. To see the
first occurrence of a field, you must have permission to view or edit transformations on all the datasets in the field’s lineage
including its first occurrence.
Related Information
Tasks
View Field Lineage
You can see the lineage of a dataset field and all transformations involving the field across different datasets. You can also trace
the lineage of a calculated field and all dependent fields used in it. When tracing the lineage of a field, Workday displays the stages
involving the field across all datasets or tables that you have permission on. Specific stage information for each stage is displayed
in the inspector panel.
Related Information
Tasks
View Field Lineage
Reference
2022R2 What's New Post: Field Lineage for Prism Analytics
Each table and dataset field has a field type attribute. The field type is often referred to as a data type in other data applications.
When you create a table, you define the field type of each field. When you load data into a table from a delimited file, Workday
validates the data against the defined schema. If the value for a field doesn’t match the field type or other field parameters (such as
date format), then Workday marks the entire row as invalid and doesn’t include the row in the table. Instead, Workday sends the row
to an error file that you can download.
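The load-time validation described above can be sketched as follows. This is an illustrative approximation, not Workday's implementation; the schema, date format, and field names are hypothetical.

```python
from datetime import datetime

schema = {"id": int, "hired": "date"}  # hypothetical table schema

def validate_row(row, schema, date_format="%Y-%m-%d"):
    """Return None if the row matches the schema, else an error message.
    One bad field value invalidates the entire row."""
    for field, ftype in schema.items():
        value = row.get(field)
        try:
            if ftype == "date":
                datetime.strptime(value, date_format)
            else:
                ftype(value)
        except (TypeError, ValueError):
            return f"field {field!r}: value {value!r} doesn't match the field type"
    return None

rows = [{"id": "1", "hired": "2023-01-15"}, {"id": "x", "hired": "2023-01-16"}]
loaded, errors = [], []
for row in rows:
    err = validate_row(row, schema)
    if err:
        errors.append((row, err))   # sent to the downloadable error file
    else:
        loaded.append(row)          # included in the table
print(len(loaded), len(errors))     # 1 1
```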
When you create a dataset using external data, Workday attempts to guess the field type of each field by examining some of the
data. However, you can change the field type if Workday assigned the wrong field type or if you want to apply a different field type.
Note: If you change the input field type in the last stage before a Manage Fields stage, the new field type must be compatible with
the changed field type.
When you create a dataset from a custom report, Workday maps the Workday field types to dataset field types.
For Prism calculated fields, the expression result determines the field type for that field. If a Prism calculated field expression is
TO_INT(zipcode), then the field type for that field is Integer.
Note: Workday recommends using the Numeric field type in datasets where possible. When you use a different numeric field type,
such as Double, you risk losing precision and getting erroneous results depending on the data and calculations in the dataset. In
base datasets, set the field type as Numeric(x,y). You can also change the field type in a derived dataset using the CAST function.
Example: CAST([salary] AS decimal(12,4))
Currency: A variable length decimal value that supports a maximum of 20 digits before the decimal point and a maximum of 6
digits after the decimal point, combined with a Workday-recognized, 3-digit currency code. Currency values can represent any
valid positive or negative value given the specified number of digits before and after the decimal point.
Date: Date combined with a time of day with fractional seconds based on a 24-hour clock. Date range: January 1, 1753, through
December 31, 9999. Time range: 00:00:00 through 23:59:59.997.
Multi-Instance: A Workday-specific field type that contains a set of (zero or more) Workday Instance values. Each field also
retains information about the business object the Instance values are based on. Multi-Instance fields represent a 1-to-many
relationship between 2 objects. Values are zero or more hexadecimal values that reference a WID.
Numeric: A variable length numeric value that supports a total of 38 digits before and after the decimal point, with a maximum of
18 digits after. Numeric values can represent any valid positive or negative value given the specified number of digits before and
after the decimal point.
Text: Variable length non-Unicode text (also known as string) data. Maximum text length of 2,147,483,647 characters.
When you create a table or dataset from a custom report, Workday:
Creates a field in the table or dataset for every report field that uses a supported field type.
Assigns a field type to each field in the table or dataset.
Workday only retains report fields that have field types that tables and datasets support. Workday doesn't include any field that
uses an unsupported table or dataset field type.
After creating a table or dataset from a custom report, verify that the assigned field types are correct for your table or dataset
needs. Example: You might need to change a field type from Numeric to Integer to match the field type when joining the dataset
with another dataset.
This table explains how a report field type maps to a table or dataset field type when you create a table or dataset.
Numeric: Numeric, Integer, or Long. The field type in the table or dataset depends on the number of digits before and after the
decimal point in the report.
Text: Text
Date: Date
Currency: Currency
Boolean: Boolean
Mapping Table and Dataset Field Types to Prism Data Source Field Types
When you enable a table for analysis or publish a dataset, Workday creates a Prism data source.
This table lists the field types Workday uses in a Prism data source:
Table or Dataset Field Type Prism Data Source Field Type Notes
Text Text
Boolean Boolean
Currency Currency
Integer Numeric
Long Numeric
Double Numeric
Related Information
Tasks
Change Dataset Field Types
Reference
Reference: Currency Format Requirements for External Data
The Next Level: Prism Analytics Best Practices
If a field value in a table or dataset is empty, it's considered a NULL value. When you enable a table for analysis or publish a dataset,
Workday replaces all NULL values with a default value in the Prism data source. Prism data sources, discovery boards, and reports
have no concept of NULL values.
The raw data from the source is missing values for a particular field.
The raw data from an external data source can't be parsed for the specified field type.
A Prism calculated field expression returns an empty or invalid result.
The dataset contains a Join stage and:
You configure an outer join (you include all rows from at least one of the dataset pipelines).
There's no match from 1 pipeline to the other.
If you're familiar with SQL, this is similar to an unjoined foreign key.
Boolean: False
Date: (Blank)
Note: If the date value in a table that is enabled for analysis is 0000-12-31T23:59:59.999Z, then Workday displays inconsistent
results.
Instance: (Blank)
Multi-Instance: (Blank)
Text: (Blank)
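The NULL-to-default substitution at publish time can be sketched as a simple lookup. This is an illustrative sketch, not Workday code; the blank string standing in for (Blank) is an assumption.

```python
# Default values substituted for NULL per field type, as listed above.
# "" stands in for (Blank) here; that representation is an assumption.
DEFAULTS = {
    "Boolean": False,
    "Date": "",
    "Instance": "",
    "Multi-Instance": "",
    "Text": "",
}

def publish_value(field_type, value):
    """Replace a NULL (None) value with the field type's default, since
    Prism data sources have no concept of NULL."""
    if value is None:
        return DEFAULTS.get(field_type, "")
    return value

print(publish_value("Boolean", None))   # False
print(publish_value("Text", "hello"))   # hello
```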
Arithmetic calculations on numeric fields that involve a NULL return NULL. Example: 5 + NULL returns NULL.
Comparison operations that result in a Boolean field that involve a NULL return NULL. Example: 5 > NULL returns NULL.
Currency fields have 2 components to the field value (the currency code and currency value), and as a result, they handle NULLs a
little differently than numeric fields in arithmetic calculations.
Addition and subtraction calculations on Currency fields that involve a NULL return NULL. Example: TO_CURRENCY("5.00
USD") + NULL returns NULL.
Addition and subtraction calculations on Currency values that use different currency codes return NULL. Example:
TO_CURRENCY("5.00 USD") + TO_CURRENCY("5.00 EUR") returns NULL.
You can't use Currency fields in comparison operations. However, you can test for NULL values by using this syntax:
value IS NULL
value IS NOT NULL
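The NULL propagation rules for arithmetic and for Currency fields can be sketched in Python. This is an illustrative model only; representing a Currency value as a (code, amount) pair is an assumption, not Workday's internal representation.

```python
def null_add(a, b):
    """Arithmetic involving NULL returns NULL (mirrors 5 + NULL -> NULL)."""
    if a is None or b is None:
        return None
    return a + b

def currency_add(a, b):
    """Currency values modeled as (code, amount) pairs: NULL operands or
    mismatched currency codes both yield NULL."""
    if a is None or b is None:
        return None
    code_a, amt_a = a
    code_b, amt_b = b
    if code_a != code_b:              # e.g. USD + EUR -> NULL
        return None
    return (code_a, amt_a + amt_b)

print(null_add(5, None))                         # None
print(currency_add(("USD", 5.0), ("EUR", 5.0)))  # None
print(currency_add(("USD", 5.0), ("USD", 2.5)))  # ('USD', 7.5)
```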
For grouping fields, any NULL values result in their own group.
For summarization fields, NULL handling depends on the summarization type.
Average: Returns the average of all valid numeric values in the group. It sums all values in the provided expression and divides by
the number of valid (not NULL) rows.
Count: Returns the number of all rows in a group (counts all values, both NULL and non-NULL).
Maximum: Returns the greatest of all non-NULL values, and NULL if all values are NULL.
Minimum: Returns the lowest of all non-NULL values, and NULL if all values are NULL.
Sum: Returns the total of all non-NULL values, and NULL if all values are NULL.
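These NULL-handling rules can be modeled directly. A minimal sketch, assuming None stands in for NULL; this is not Workday code.

```python
def summarize(values, how):
    """NULL (None) handling per summarization type: Count counts every
    row; the other types ignore NULLs and return NULL when the group
    contains no non-NULL values."""
    non_null = [v for v in values if v is not None]
    if how == "Count":
        return len(values)        # counts both NULL and non-NULL rows
    if not non_null:
        return None               # all values NULL -> NULL
    if how == "Average":
        return sum(non_null) / len(non_null)  # divides by valid rows only
    if how == "Maximum":
        return max(non_null)
    if how == "Minimum":
        return min(non_null)
    if how == "Sum":
        return sum(non_null)

vals = [10, None, 20]
print(summarize(vals, "Count"))    # 3
print(summarize(vals, "Average"))  # 15.0
print(summarize(vals, "Sum"))      # 30
```

Note that Average divides by the number of valid rows (2 here), not the total row count, which is why the result is 15.0 rather than 10.0.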
Related Information
Concepts
Concept: Table and Dataset Field Types
Reference
Reference: Currency Format Requirements for External Data
A Prism calculated field is a user-created field that generates its values based on a calculation or condition, and returns a value for
each input row. Values are computed based on expressions that can contain values from other fields, constants, mathematical
operators, comparison operators, or built-in row functions.
Derive meaningful values from base fields, such as calculating someone's age based on their birthday.
Do data cleansing, such as substituting 1 value for another.
Compute new data values based on a number of input variables, such as calculating a profit margin value based on revenue
and costs.
Sometimes you need several steps to achieve the result that you want. You can use the result of a Prism calculated field in the
expressions of other Prism calculated fields, enabling you to define a chain of processing steps.
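Chained calculated fields can be modeled as an ordered list of per-row expressions, where later expressions read the results of earlier ones. An illustrative sketch only; the field names (margin, margin_pct) and row data are hypothetical, and real Prism calculated fields are written in the Prism expression language, not Python.

```python
# Each entry is (new field name, expression over the row).
calculated_fields = [
    ("margin", lambda r: r["revenue"] - r["costs"]),
    ("margin_pct", lambda r: r["margin"] / r["revenue"] * 100),  # reads "margin"
]

def apply_calculated_fields(row, fields):
    """Evaluate expressions in order so that later fields can use the
    results of earlier ones, forming a chain of processing steps."""
    for name, expr in fields:
        row[name] = expr(row)
    return row

row = apply_calculated_fields({"revenue": 200.0, "costs": 150.0},
                              calculated_fields)
print(row["margin"], row["margin_pct"])  # 50.0 25.0
```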
When you create a Prism calculated field, the inspector panel displays for the new field. To see a list of available functions, click the
Functions Library tab in the inspector panel.
You can extract and export the expressions used in dataset pipelines as CSV files from the View Dataset Lineage report.
Related Information
Concepts
Concept: Prism Expression Language
Tasks
Add a Prism Calculated Field to a Dataset
Reference
The Next Level: Prism Analytics Best Practices
A data administrator can control which fields of a dataset are visible. Hidden fields aren't visible further along in the development
process. Example: If you hide a field in a stage in a pipeline, that field isn't visible in a later stage. Hidden fields aren't included in
the Prism data source of a published dataset, or in an imported dataset of a derived dataset.
Protect sensitive data. In some cases, you might want to hide fields to protect sensitive information. You can hide detail
fields, but still allow access to summary information. Suppose that you have a dataset containing employee salary
information. You might want to hide salaries per person, but still enable analysts to view average salary by department.
Hide unpopulated or sparse data fields. You might have fields in your raw data that didn't have any data collected. The data
collected might be too sparse to be valid for analysis. Suppose that a web application has a placeholder field for
comments, but it was never implemented on the website so the comments field is empty. Hiding the field prevents analysts
from using a field with mostly null values when they analyze the data.
Use a calculated field instead of the fields that it's based on. You might add a Prism calculated field to transform the
values of the raw data. You want your users to analyze the transformed values, not the raw values. Suppose that you have a
"return reason code" field where the reason codes are numbers (1, 2, and 3). You could transform the numbers to the actual
reason information (such as Didn't Fit, Changed Mind, and Poor Quality), so the data is more usable during analysis.
Hide Prism calculated fields that do interim processing. As you work on your dataset to cleanse and transform the data,
you might need to add interim Prism calculated fields to achieve a final result. These fields are necessary to do a
processing step, but aren't intended for final consumption. You can hide these working fields so they don't clutter later
stages or the dataset details.
Hide a field in the Manage Fields stage by unselecting the check box for a field. Although you can also hide a field in the inspector
panel, it's a best practice to hide fields in the Manage Fields stage.
Related Information
Concepts
Concept: Prism Analytics Data Management Workflow
Schedules for dataset integrations enable you to specify when, how often, and under what criteria to import data into a dataset.
Workday enables you to import data into a dataset immediately on an ad hoc basis or according to a preconfigured schedule.
As you create an integration schedule, consider these actions that you can perform on it:
Action Description
Change Schedule (recurring schedules only) Edit the run frequency (daily, monthly, weekly), start time, and
date range for the integration schedule. You can also change
to another scheduled recurring process.
Edit Environment Restrictions Select the environment in which you want the scheduled
integration to run.
Edit Scheduled Occurrence (recurring schedules only) Update the schedule date and time for one particular
occurrence of the scheduled request.
View All Occurrences (recurring schedules only) View all future occurrences of an integration schedule within
a specified range of dates and times.
Related Information
Tasks
Manage Dataset Integration Schedules
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset Using Workday Data
Manage Fields A Manage Fields stage is a type of stage that enables you to
view field changes, select fields, and edit fields.
Related Information
Concepts
Concept: Dataset Stages
When you view an error file for a table load, you might see these errors:
3000 The number of fields in the source doesn’t match the number of fields in the
wBucket schema. You use wBuckets when you load data into a table using the
REST API.
3002 Invalid data - Integer field. When the precision and scale of a Numeric field
indicates that the field should contain integer values only, you might see this error.
Example: Numeric(9,0)
4000 The number of characters for the Text field exceeds the maximum allowed (32,000
characters).
4003 The numeric value is too large for the defined precision and scale for the Numeric
field.
The error file has reached its maximum of 10 MB and has stopped
recording new error messages.
The number of characters for the entire row of data exceeds the maximum
allowed (500,000 characters).
Related Information
Concepts
Concept: Table Error File
Prerequisites
Security: Prism: Tables Create domain in the Prism Analytics functional area.
Context
Workday:
Uses the field information in the first file to define the table fields.
Loads the data in the files into the table.
Upload up to 100 files. All files must use the same schema.
Modify the field type that Workday guesses for each field. Example: You can change the field type from Numeric to Text, or
from Text to Multi-Instance.
Add other fields to the table schema.
Create a data change task to load data from files you upload into the new table.
Steps
12. (Optional) Click Add Field to add 1 or more fields. In the inspector panel for the field, configure the field attributes.
See Reference: Table Field Attributes.
Result
Workday creates the table and starts loading the data in the files into the table. On the View Table Details report, click the Activities
tab to view the data load progress. Refresh the page to get the most recent status.
Next Steps
To load data from a delimited file to the table again, create a data change task for the table.
If there were errors loading data into the table, download the error file from the data load on the Activities tab of the View Table
Details report.
Related Information
Concepts
Concept: Prism Analytics Data Management Workflow
Concept: Tables
Concept: Table and Dataset Field Types
Concept: Table Error File
Tasks
Parse External Data in a Table
Reference
Reference: WPA_ Fields
Reference: Supported File Formats for External Data in Tables and Datasets
Reference: Table Field Attributes
Reference: Naming Guidelines
Reference: External Data Limits
2020R1 What's New Post: Prism Analytics Tables
The Next Level: Prism Analytics Best Practices
Prerequisites
Security: Prism: Tables Create domain in the Prism Analytics functional area.
Context
You can create a table based on an existing Workday custom report by selecting a report as the schema source. Workday retains
only the fields with field types that tables currently support.
Steps
Result
Workday creates an empty table using the schema defined in the report.
Next Steps
Create a data change task using the same report as the source to load data into the table from the report.
Related Information
Concepts
Concept: Creating Reports to Import into Tables and Datasets
Concept: Prism Analytics Data Management Workflow
Concept: Tables
Concept: Table and Dataset Field Types
Reference
Reference: WPA_ Fields
Reference: Table Field Attributes
Reference: Naming Guidelines
The Next Level: Prism Analytics Best Practices
Prerequisites
Security:
Context
You can create a table based on an existing dataset or other table by selecting an existing table or dataset as the schema source.
Workday defines the new table schema based on the output schema of the existing table or dataset.
If the existing dataset includes a Double field, consider converting it to a Numeric field in the dataset before creating the table.
When you create a table from an existing table or dataset, you can:
Steps
Define some field constraints that ensure the accuracy and reliability of the data in the table, such as Required or
Use as External ID.
Change other field attributes, such as Digits Before and Digits After, based on the field type.
Result
Workday creates an empty table using the schema that you defined.
Next Steps
Related Information
Concepts
Concept: Prism Analytics Data Management Workflow
Concept: Tables
Concept: Table and Dataset Field Types
Reference
Reference: Table Field Attributes
Reference: Naming Guidelines
Reference: WPA_ Fields
Prerequisites
Security: Prism: Tables Create domain in the Prism Analytics functional area.
Context
You can create a table by manually defining each field in the table schema. When you create a table manually, the table is empty.
You can create a data change task to load data into the table.
Steps
Next Steps
Related Information
Concepts
Concept: Prism Analytics Data Management Workflow
Concept: Tables
Prerequisites
Security: Prism Datasets: Create domain in the Prism Analytics functional area.
Context
You can create a dataset using external data by transferring data from an SFTP server. You might want to create a dataset that
gets its data from an external server when the server regularly collects or adds new data. You configure how often the dataset gets
new data from the server.
For integration runs that transfer data from the SFTP server to succeed:
Each file from the server should be less than 1 GB compressed (approximately 10 GB uncompressed).
Steps
Option Description
To specify a port number, add it to the end of the domain name or IP address.
If you don't specify a port number, Workday uses port 22.
Directory (Optional) The directory on the server that contains the files. Directory names
are case-sensitive. Include a leading slash (/) only for a full path, not a relative
path.
Use Temp File Writes the imported data to a temporary file in Workday with a randomly
generated name. After the data import is complete, Workday automatically
renames the file to the correct name.
You might want to enable this option if the data import takes a very long time
and might not finish before the next scheduled time to import data from the
same server.
Authentication Method and Details Select the type of security authentication that the SFTP server uses:
Option Description
Delete After Retrieval Deletes the files on the SFTP server after the data is imported into the
dataset. If Workday is unable to delete the files from the SFTP server, the data
retrieval fails.
You can transfer files that are compressed or not. For compressed files,
Workday only supports gzip compression.
Decrypt Using If you want to decrypt the imported files using Pretty Good Privacy (PGP),
select a PGP Private Key Pair.
6. (Optional) In the Environment Restrictions section, at the Restricted To prompt, select the environment in which you want
to use the settings defined in the Transport section.
If you leave this option empty, Workday applies the transport settings to each environment in which the dataset integration
runs. When a dataset integration runs in a particular environment, such as Implementation or Production, the transport
settings only work if the Restricted To option matches the current environment. When the current environment and the
configured environment don’t match, the dataset integration fails and retrieves no files from the SFTP server. You might
want to restrict the transport settings to a particular environment to avoid inadvertently transferring test data to a non-test
endpoint.
Example: You create the dataset in an Implementation environment and select Implementation in Restricted To. Later, you
migrate this dataset to a Production environment and the next time the dataset integration runs, the integration fails. To
ensure that the dataset integration runs successfully in the Production environment, edit the dataset integration details and
either clear the Restricted To option or change it to Production.
7. On the Create Dataset Retrieval - Schedule Request Type task, in Run Frequency, specify how often to import data from
the SFTP server.
If you're importing the data once in the future or on a schedule, specify the criteria for either option on the Create Dataset
Retrieval - Schedule Integration task.
After the dataset is created, you can run the integration to bring in data to the dataset on an ad hoc basis. From the related
actions menu of the View Dataset Details report, select Dataset > Run Integration Now.
Note: You can't bring data into the same dataset at times that overlap with each other.
8. Specify a unique name for the dataset.
The dataset name is what displays in the Data Catalog. You can change the name when you create, edit, or copy the
dataset.
9. (Optional) Change the dataset API name. Workday automatically selects an API name based on the dataset name you enter,
modifying it to make it meet the name requirements. You can’t change this name after you finish creating the dataset.
10. (Optional) Create or edit 1 or more tags to organize the dataset in the Data Catalog.
11. (Optional) Add a description to help others understand the data in this dataset. You can change the description when you
edit the dataset.
12. Select how you want to update the data in the dataset when it receives new data from an integration run.
Option Description
Replace Workday deletes the existing data in the dataset and replaces it with the data it
imports from the SFTP server.
Append Workday keeps the existing data in the dataset and adds to it the new data it
imports from the SFTP server.
Workday imports all data in all files during every integration run. Append mode is
different from incrementally updating data in a dataset. Whether the data in the
dataset gets updated incrementally depends on whether the SFTP server contains only
incremental updates since the last integration run.
All files in every integration run must use the same parsing options
(including the header row configuration) that were used during the first
integration run.
The fields must be in the same order in all files in every integration run.
If the schema in a subsequent integration run contains new fields, the new
fields must be located at the end of all previous fields.
If the file schema in a subsequent integration run deletes one or more fields,
the deleted fields must be at the end.
If a schema deletes fields, ensure that no future schema adds new fields;
otherwise the integration run fails. To ensure that all future integrations
run successfully, always keep existing fields in the schema and only add
new fields. If necessary, you can include empty (NULL) values in existing
fields.
Note: An integration fails when the schema of the new data doesn't contain a field that currently exists in the dataset, and
the removed field is used in a stage in the dataset. Example: If the dataset includes a Manage Fields stage and the
integration brings in data that is missing a field in the dataset, the integration fails. That's because the Manage Fields stage
works on every field in the dataset.
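The schema rules above reduce to a simple check: because fields may only be added or deleted at the end, one field list must be a prefix of the other, and a removed field must not be used in any stage. A minimal sketch, not Workday's implementation; the function and field names are hypothetical.

```python
def append_compatible(current_fields, incoming_fields, fields_used_in_stages=()):
    """Check the append-mode schema rules: fields in the same order,
    adds/deletes only at the end, and no removed field used by a stage."""
    shorter = min(len(current_fields), len(incoming_fields))
    if current_fields[:shorter] != incoming_fields[:shorter]:
        return False                          # reordered or mid-list change
    removed = set(current_fields[shorter:])   # fields dropped from the end
    return not (removed & set(fields_used_in_stages))

print(append_compatible(["id", "name"], ["id", "name", "dept"]))  # True
print(append_compatible(["id", "name"], ["name", "id"]))          # False
print(append_compatible(["id", "name"], ["id"],
                        fields_used_in_stages=["name"]))          # False
```

The last call models the failure described in the note: the incoming data drops the name field while a stage (such as Manage Fields) still depends on it.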
13. Click Save.
Workday creates the dataset, but it has no data until Workday imports the data and fields from the SFTP server during the
first integration run. Depending on when you scheduled the data to import, the dataset might be empty for some time.
Workday also adds 2 fields that provide information about each integration run. See Reference: WPA_ Fields.
14. (Optional) Change the name of your integration schedule. See Manage Dataset Integration Schedules.
15. Access the Data Catalog report, right-click the dataset you just created, and select Edit.
16. Configure how to parse the data in the files from the SFTP server.
See Parse External Data in a Dataset.
Related Information
Concepts
Concept: Dataset Workspace
Concept: Datasets
Concept: Dataset Stages
Concept: Dataset Pipelines
Reference
Reference: Supported File Formats for External Data in Tables and Datasets
Reference: WPA_ Fields
Reference: Naming Guidelines
Reference: External Data Limits
The Next Level: Prism Analytics Best Practices
Prerequisites
Security: Prism Datasets: Create domain in the Prism Analytics functional area.
Context
You can create a dataset using external data by uploading a file. You might want to create a dataset by uploading a file when the
data in the file is less likely to change over time.
When you create a dataset by uploading a file, the source data in the dataset remains the same over time. However, you can
change the data in the dataset later by uploading a new file to the dataset. See Upload a New File to a Dataset.
Steps
Related Information
Concepts
Concept: Dataset Workspace
Concept: Datasets
Concept: Dataset Stages
Concept: Dataset Pipelines
Tasks
Upload a New File to a Dataset
Reference
Reference: Supported File Formats for External Data in Tables and Datasets
Reference: Naming Guidelines
Reference: External Data Limits
The Next Level: Prism Analytics Best Practices
Prerequisites
Security: Prism Datasets: Create domain in the Prism Analytics functional area.
Context
You can create a dataset using Workday data. You do this by creating a dataset using an existing Workday custom report as the
source for the dataset.
You configure how often the dataset gets new data from the report.
Workday retains only the fields with field types that datasets currently support.
Steps
On the Create Dataset Retrieval - Configure Report Retrieval task, you configure how to import the data from the custom
report.
3. Select a Custom Report that has the data you want to import into this dataset.
4. In the Report Criteria table, select values for the report prompts, if applicable.
Workday filters the report data with the specified values as the report runs and before importing the data into the dataset.
As you complete this step, consider:
Option Description
Value Type This option affects how Workday determines the value for this field prompt:
Specify Value. Workday uses the same value that you specify here
each time it runs the report to import data into the dataset.
Value Workday uses the value or field you select here to filter the data in the report.
5. (Optional) In the Environment Restrictions section, at the Restricted To prompt, select the environment in which you want
to use the settings defined in the Transport section.
If you leave this option empty, Workday applies the transport settings to each environment in which the dataset integration
runs. When a dataset integration runs in a particular environment, such as Implementation or Production, the transport
settings only work if the Restricted To option matches the current environment. When the current environment and the
configured environment don’t match, the dataset integration fails and retrieves no data from the specified custom report.
You might want to restrict the transport settings to a particular environment to avoid inadvertently transferring test data to
a non-test endpoint.
https://doc.workday.com/internal/api/webapp/print/3b3760bb-3c9f-4606-b4ad-00fc9795a442 39/200
Example: You create the dataset in an Implementation environment and select Implementation in Restricted To. Later, you
migrate this dataset to a Production environment and the next time the dataset integration runs, the integration fails. To
ensure that the dataset integration runs successfully in the Production environment, edit the dataset integration details and
either clear the Restricted To option or change it to Production.
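The Restricted To check described above can be sketched as follows (illustrative pseudologic based on the description, not Workday code):

```python
def should_run(restricted_to, current_env):
    # Assumed semantics from the description above: when Restricted To is
    # empty, the transport settings apply in every environment; otherwise
    # the integration only runs when the environments match.
    return restricted_to is None or restricted_to == current_env

print(should_run(None, "Production"))              # True
print(should_run("Implementation", "Production"))  # False: integration fails
```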
6. On the Create Dataset Retrieval - Schedule Request Type task, in Run Frequency, specify how often to import data from
the custom report.
If you're importing the data once in the future or on a schedule, specify the criteria for either on the Create Dataset Retrieval
- Schedule Integration task.
After the dataset is created, you can run the integration to bring data into the dataset on an ad hoc basis. From the related
actions menu of the View Dataset Details report, select Dataset > Run Integration Now. You can't run integrations that bring
data into the same dataset at overlapping times.
Note: An integration fails to bring in new data from the custom report when the report schema doesn't contain a field that
currently exists in the dataset, and the removed field is used in a stage in the dataset. Example: If the dataset includes a
Manage Fields stage and the integration brings in data that is missing a field in the dataset, the integration fails. That's
because the Manage Fields stage works on every field in the dataset.
7. Specify a unique name for the dataset.
The dataset name is what displays in the Data Catalog. You can change the name when you create, edit, or copy the
dataset.
8. (Optional) Change the dataset API name. Workday automatically selects an API name based on the dataset name you enter,
modifying it to make it meet the name requirements. You can’t change this name after you finish creating the dataset.
9. (Optional) Create or edit 1 or more tags to organize the dataset in the Data Catalog.
10. (Optional) Add a description to help others understand the data in this dataset. You can change the description when you
edit the dataset.
11. Select how you want to update the data in the dataset when it receives new data from an integration run.
Option Description
Replace Workday deletes the existing data in the dataset and replaces it with the data it
imports from the custom report.
Append Workday keeps the existing data in the dataset and adds to it the new data it
imports from the custom report.
Workday imports all data in the report during every integration run, resulting in
duplicate data in the dataset. Select append mode for a custom report dataset
when you want a snapshot of the custom report data to maintain history in the
dataset for trending use cases.
Note: Ensure that you don’t change the Column Heading Override XML Alias values
in the custom report definition. Workday uses these values to map fields from the
custom report into the dataset.
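The difference between the two update modes can be illustrated with a small sketch (hypothetical row lists, not Workday code); note how Append duplicates rows that appear in every integration run:

```python
def apply_update(existing_rows, imported_rows, mode):
    # Replace: discard the existing data; Append: keep it and add the new import.
    if mode == "Replace":
        return list(imported_rows)
    if mode == "Append":
        return existing_rows + list(imported_rows)
    raise ValueError(f"unknown mode: {mode}")

print(apply_update(["r1", "r2"], ["r1", "r2", "r3"], "Replace"))  # ['r1', 'r2', 'r3']
print(apply_update(["r1", "r2"], ["r1", "r2", "r3"], "Append"))   # ['r1', 'r2', 'r1', 'r2', 'r3']
```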
Workday creates the dataset, but it has no data until Workday runs the report and then imports the data and fields from the
report during the first integration run. Depending on when you scheduled the data to import, the dataset might be empty for
some time. Workday also adds 2 fields that provide information about each integration run. See Reference: WPA_ Fields.
13. (Optional) Change the name of your integration schedule. See Manage Dataset Integration Schedules.
14. Access the Data Catalog report, right-click the dataset you just created, and select Edit.
15. (Optional) Add a Stage to a Dataset.
You can add only some stage types to base datasets.
16. (Optional) Add a Prism Calculated Field to a Dataset.
You can add a Prism calculated field to any stage.
Related Information
Concepts
Concept: Dataset Workspace
Concept: Datasets
Concept: Dataset Stages
Concept: Dataset Pipelines
Prerequisites
Security:
Context
When you first create a derived dataset, Workday creates the Primary Pipeline. Import other datasets into the derived dataset so
you can blend data together.
You can add a stage to any pipeline in the dataset. However, some stages, such as the Join stage, can only be added to the Primary
Pipeline.
When you add a Join or Union stage to the Primary Pipeline, you must select another pipeline in the derived dataset to blend with
the Primary Pipeline. Workday uses the last stage of that pipeline as the input to the Join or Union stage.
Steps
Related Information
Concepts
Concept: Dataset Workspace
Concept: Datasets
Concept: Dataset Stages
Concept: Dataset Pipelines
Reference
Reference: Naming Guidelines
The Next Level: Prism Analytics Best Practices
Prerequisites
Context
A derived dataset is based on 1 or more existing tables or datasets. Use derived datasets to blend data from multiple sources.
To blend data from multiple sources, you import multiple tables or datasets into the derived dataset. When you first create a
derived dataset, you base it on an existing table or dataset. Afterward, you must import other tables or datasets into the
derived dataset.
When you import a table or dataset into a derived dataset, Workday creates a new pipeline. The pipeline name is the same as the
table or dataset name you import. You can add stages to the new pipeline.
Once a derived dataset has multiple tables or datasets imported into it, you can add a stage, such as a Join stage, to the Primary
Pipeline to blend data with any other pipeline.
Steps
Result
The Pipelines panel displays the new pipeline with the first stage being an Import stage. The pipeline name is the same as the
table or dataset you imported.
Next Steps
(Optional) Add a stage, such as a Join stage, that blends together data from the Primary Pipeline and the pipeline you
added.
Related Information
Tasks
Steps: Create a Derived Dataset
Prerequisites
Security: Prism Datasets: Manage domain in the Prism Analytics functional area.
Context
You can manage how you set up integration schedules for base datasets created from:
SFTP
Custom reports
Steps
Option Description
Catch Up Behavior Select how many times the scheduled integration runs
after maintenance issues cause errors.
Option Description
Trigger on Status Select the status of the scheduled future process that
causes the dataset integration to run.
Time Delayed Configuration (Optional) Specify the number of days, hours, or minutes
to delay running the dataset integration after the trigger.
You might want to delay integration to review the latest
source files.
5. (Optional) Change the name of the schedule in the Request Name field. Workday assigns a name to the schedule based on
the name of the dataset and prepends Dataset Integration Schedule: to the name.
6. (Optional) Perform actions such as transferring ownership of the schedule or editing 1 scheduled occurrence.
a. Access the View Integration Details report for the dataset.
b. Find the integration schedule in the Request Name column on the Schedules tab.
c. From the related actions menu of the integration schedule, select Schedule Future Process and then the desired
action.
Result
Workday imports data into the dataset based on the criteria you specified.
You can view the status of all scheduled integration processes in the Process Monitor and Scheduled Future Processes reports.
The status includes the date and time of the last successful integration. The last successful integration date informs you about the
freshness of the data brought into the dataset. Example: If the last successful integration date is 1 week ago, but your integration
schedule is set to run daily, this discrepancy could indicate a failure in the integration process.
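A freshness check like the one described can be sketched as follows (illustrative; the actual dates come from the Process Monitor and Scheduled Future Processes reports):

```python
from datetime import datetime, timedelta

def is_stale(last_success, expected_interval, now):
    # The data may be stale when the last successful run is older
    # than the schedule's expected interval.
    return now - last_success > expected_interval

# Daily schedule whose last success was a week ago: likely a failed integration.
print(is_stale(datetime(2023, 3, 4), timedelta(days=1), datetime(2023, 3, 11)))  # True
```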
Related Information
Concepts
Concept: Dataset Integration Schedules
Prerequisites
Context
When you create a dataset by uploading a file, the data in that dataset stays the same over time. If you have a new version of the
source file, you can upload it to the same dataset. You might want to upload a new file to update an existing dataset instead of
creating a new dataset. When you update an existing dataset, you maintain any relationships with datasets that depend on the
existing dataset.
When you upload a new file, all existing data is replaced with the data in the new file.
Sometimes, the fields in the source file might change, also known as a schema change. Fields might be added, deleted, or moved.
When the schema changes and the new file is imported into the dataset, you must edit the dataset and save it to incorporate the
changes in the dataset. If you don’t save the dataset, it'll continue to use the old schema definition. The next time the dataset is
published, you might get inconsistent data in the Prism data source.
Note: Uploading a file fails when the schema of the new file doesn't contain a field that currently exists in the dataset, and the
removed field is used in a stage in the dataset. Example: If the dataset includes a Manage Fields stage and you try to upload a file
that is missing a field in the dataset, the upload fails. That's because the Manage Fields stage works on every field in the dataset.
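The failure condition in the note can be expressed as a simple pre-upload check (illustrative field names; not a Workday API):

```python
# Fields that stages in the dataset still reference (hypothetical names).
fields_used_in_stages = {"Worker", "Cost_Center"}
# Fields present in the schema of the new file.
new_file_fields = {"Worker", "Location"}

missing = fields_used_in_stages - new_file_fields
if missing:
    # Mirrors the note above: a stage referencing a removed field fails the upload.
    print(f"Upload would fail; stages reference missing fields: {sorted(missing)}")
```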
Steps
1. Access the View Dataset Details report for the dataset you want to update with new data.
2. Click Upload File.
3. In the confirmation dialog, click Upload.
4. Navigate to and select the local file.
5. If Workday successfully uploads the file, navigate to the Edit Dataset Transformations task.
If the schema changed, the Save button is active.
6. If the Save button is active, click Save to apply schema changes.
Result
Workday replaces the data in the dataset with the data in the file you uploaded. It updates the fields in the dataset if the schema in
the uploaded file is different.
Next Steps
Verify that no schema changes broke any Prism calculated fields, stages, derived datasets, or Workday reports that depend on the
dataset whose schema changed.
Related Information
Concepts
Concept: Dataset Schema Changes
Tasks
Steps: Create a Dataset with External Data (Upload a File)
Reference
Reference: External Data Limits
Prerequisites
Context
You can view how much Prism data your organization has used on your tenant. You might want to view your Prism data usage to
ensure you're in compliance with your purchase agreement with Workday.
Reportable Rows. This value summarizes all rows in published datasets and tables enabled for analysis in your tenant,
including the datasets and tables that you don't have permission on.
Table and dataset usage. This grid lists all tables and datasets that you have permission on, and includes the disk space
used and number of reportable rows per table and dataset.
Steps
Prerequisites
Context
When you bring in data and transform it in Prism Analytics, you can create complex workflows containing multiple tables and
datasets. You create a derived dataset by importing a table or dataset on which the derived dataset is based. The derived dataset
depends on the table or dataset you import into it. You can visually see these dependencies by viewing the lineage for a table or
dataset.
Viewing the lineage enables you to see dependencies, and to trace the origin of a derived dataset back to its tables and base
datasets. The lineage gives you insight into the potential consequences of changes you make to your data (impact analysis).
Steps
On the View Table Lineage or View Dataset Lineage report, the graph displays dependencies in both directions from the
selected object where applicable:
Upstream and downstream dependencies. When you view the lineage of a derived dataset, the graph displays the
datasets imported into the derived dataset, and any other derived datasets that import this derived dataset.
Downstream dependencies only. When you view the lineage of a table or base dataset, the graph displays any
derived datasets that import this table or base dataset.
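The two directions can be modeled with a small dependency graph (hypothetical dataset names; not Workday code):

```python
# Hypothetical graph: derived dataset -> datasets or tables it imports.
imports = {
    "DerivedA": ["BaseX", "BaseY"],
    "DerivedB": ["DerivedA"],
}

def downstream(name):
    """Datasets that directly or indirectly import `name`."""
    hits = {d for d, srcs in imports.items() if name in srcs}
    for d in list(hits):
        hits |= downstream(d)
    return hits

print(sorted(downstream("BaseX")))  # ['DerivedA', 'DerivedB']
```

A base dataset such as BaseX has only downstream dependencies, while DerivedA has both directions, matching the description above.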
Related Information
Tasks
View Dataset Dependencies
Reference
The Next Level: Prism Performance and Troubleshooting Tips
Prerequisites
Context
When you bring in data and transform it in Prism Analytics, you can create complex workflows containing multiple transformation
stages across various datasets. You can see the different stages and datasets that a field passed through up to that point by
viewing the lineage for the field.
Steps
On the View Field Lineage report, the graph displays all transformations involving data in the field up to that point across all
datasets you have access to.
When you access the View Field Lineage report, you start at the field that you selected, the root node, and you can:
Select a node to view details in the inspector panel on the stage represented in that node.
Control your view of the lineage by expanding and collapsing chevrons in the graph.
View the changes across different datasets and tables, differentiated by a colored outline encompassing all nodes
in a dataset.
Navigate through decision nodes, where there’s a divergence in the lineage.
View all nodes in datasets and tables that you have at least View Table or View Transformations permissions for.
Trace the lineage of all dependent fields for calculated fields.
Related Information
Concepts
Concept: Dataset Field Origin
Concept: Field Lineage
Reference
2022R2 What's New Post: Field Lineage for Prism Analytics
Prerequisites
Context
You create a derived dataset by importing a table or dataset on which the derived dataset is based. The derived dataset depends
on the table or dataset you import into it. You can view these dataset dependencies.
Steps
1.8.16 | Reference: Supported Date Formats for External Data in Tables and Datasets
External data that you bring into the Data Catalog might contain fields with date or time values. Workday only supports some date
formats. How Workday uses the date formats depends on the object you create:
Table. When you define a Date field in the schema of a table, you can specify any of the supported date formats. The date
values in the external data must match the specified date format in order for the row to be valid and loaded into the table.
Base dataset. If Workday recognizes the format of a date field in the external file, it automatically assigns the Date field type
when parsing the file.
Workday supports these date formats as well as any shortened versions of them:
yyyy-MM-dd'T'HH:mm:ssZZ
yyyy-MM-dd'T'HH:mm:ss
MM/dd/yy h:mm:ss a ZZ
MM/dd/yy h:mm:ss a
MM/dd/yy H:mm:ss ZZ
MM/dd/yy H:mm:ss
yy-MM-dd h:mm:ss a ZZ
yy-MM-dd h:mm:ss a
yy-MM-dd H:mm:ss ZZ
yy-MM-dd H:mm:ss
yyyy-MM-dd HH:mm:ss.SSS
Date only
yyyy-MM-ddZZ
yy-MM-dd
yyyy-MM-dd
MM/dd/yy
Time only
'T'HH:mm:ssZZ
'T'HH:mm:ss
HH:mm:ssZZ
HH:mm:ss
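For illustration outside Workday, a few of the Java-style patterns above map onto Python `strptime` directives (an assumed mapping for the sketch; Workday's actual parser isn't exposed):

```python
from datetime import datetime

# Assumed correspondence between a few of the patterns above and
# Python strptime directives.
PATTERN_MAP = {
    "yyyy-MM-dd'T'HH:mm:ss": "%Y-%m-%dT%H:%M:%S",
    "yyyy-MM-dd HH:mm:ss.SSS": "%Y-%m-%d %H:%M:%S.%f",
    "yyyy-MM-dd": "%Y-%m-%d",
    "MM/dd/yy": "%m/%d/%y",
    "HH:mm:ss": "%H:%M:%S",
}

def parse_with(pattern: str, value: str) -> datetime:
    return datetime.strptime(value, PATTERN_MAP[pattern])

print(parse_with("yyyy-MM-dd", "2023-03-11").date())  # 2023-03-11
```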
Related Information
Concepts
Concept: Table and Dataset Field Types
Tasks
Change Dataset Field Types
Reference
TO_DATE
API Name
Field Name
This is the display name.
Dataset field names: Can't start with a space or with WPA_. Must be unique within the dataset.
Table field names: Can't start with ( or with WPA_. Table field names are case insensitive.
Must be unique in the Data Catalog. Data change task names are case insensitive.
Can contain a maximum of 255 characters.
Can include any character, including multi-byte characters, as long as all characters are UTF-8 encoded.
Can't start with a space or WPA_.
Can’t end with a space.
Must be unique in the Data Catalog. Table API names are case insensitive.
Can contain a maximum of 255 characters.
Can only include alphanumeric and underscore characters.
Must start with a letter.
Must end with an alphanumeric character.
Can't start with WPA_.
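The table API name rules above can be sketched as a validator (illustrative only, not Workday's actual validation logic):

```python
import re

# One letter, then up to 253 alphanumeric/underscore characters, ending
# alphanumeric (max 255 total); a single letter is also valid.
API_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]{0,253}[A-Za-z0-9]$|^[A-Za-z]$")

def is_valid_api_name(name):
    # Names starting with WPA_ are reserved, per the rules above.
    return bool(API_NAME.match(name)) and not name.startswith("WPA_")

print(is_valid_api_name("head_count_2023"))  # True
print(is_valid_api_name("WPA_rows"))         # False
print(is_valid_api_name("_private"))         # False
```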
Connection Names
SFTP connection names:
Must be unique.
Are case insensitive.
Can contain a maximum of 255 characters.
Can include any character, including multi-byte characters, as long as all characters are UTF-8 encoded.
wBucket Names
Must be unique.
Can contain a maximum of 255 characters.
Can only include alphanumeric and underscore characters.
Must start with a letter.
Can't end with an underscore character.
Can't start with WPA_.
1.8.18 | Reference: Supported File Formats for External Data in Tables and Datasets
To bring in non-Workday data as a table or dataset, Workday parses the data into records (rows) and fields. All characters must be
UTF-8 encoded.
Format Description
Delimited Text
A delimited file is a plain text file format for describing tabular data. Comma-separated value (CSV) files are the most
common delimited files. In a delimited file, records are separated by line breaks, and fields within a line are separated
by a delimiter (usually a comma character).
If the delimiter also exists in the field values, it must be escaped. Workday supports single character escapes (such as a
backslash), as well as enclosing field values in double quotes (as is common with CSV files).
External data that you bring into a table or dataset might contain fields with currency values. If Workday recognizes the format of a
Currency field, it automatically assigns the Currency field type.
For Workday to recognize a single field value as valid currency data, it must meet these requirements:
If a Currency field contains any value that doesn't meet these requirements, Workday treats the value as NULL.
Example: Workday recognizes these single data values as valid currency data:
3000.00 USD
$3000.00 USD
USD 3000.00
USD $3000.00
-$3,000.00 USD
($3,000.00) USD
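One rough way to illustrate recognition of these sample values is a regular expression (a sketch that accepts the samples above; Workday's actual recognition rules aren't published as a regex):

```python
import re

CURRENCY = re.compile(
    r"^(?:[A-Z]{3}\s+)?"             # optional leading currency code: "USD 3000.00"
    r"(?:-?\$?|\(\$?)"               # optional sign, symbol, or opening parenthesis
    r"\d+(?:,\d{3})*(?:\.\d+)?"      # amount with optional thousands separators
    r"\)?"                           # closing parenthesis for negative amounts
    r"(?:\s+[A-Z]{3})?$"             # optional trailing currency code
)

samples = [
    "3000.00 USD", "$3000.00 USD", "USD 3000.00",
    "USD $3000.00", "-$3,000.00 USD", "($3,000.00) USD",
]
print(all(CURRENCY.match(s) for s in samples))  # True
```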
Related Information
Concepts
Concept: Table and Dataset Field Types
Tasks
Change Dataset Field Types
Add a Prism Calculated Field to a Dataset
Workday recognizes specific characters as symbols to represent part of a date format when you create and edit tables and
datasets. This section describes the symbols to use and the patterns to use them in when you define your date format. The count
and order of the symbols determine the date format.
Workday treats any characters in the pattern that aren't in the ranges of a-z or A-Z as quoted delimiter text. Example: Workday
treats the slash (/) and colon (:) characters as delimiter text even if they aren't escaped with single quotes.
https://doc.workday.com/internal/api/webapp/print/3b3760bb-3c9f-4606-b4ad-00fc9795a442 52/200
3/11/23, 9:33 PM Workday® Administrator Guide
G: era (text). Example: AD
Z: time zone offset/id (zone). Examples: -0800; -08:00; America/Los_Angeles. 'Z' outputs the offset without a colon, 'ZZ'
outputs the offset with a colon, 'ZZZ' or more outputs the zone ID.
When you create any table or a base dataset that uses an integration, Workday automatically creates extra fields in the table or
dataset. These fields help you to uniquely identify rows in the table or dataset from different integration runs.
WPA_LoadID
This field returns a value of type Text containing a unique identifier of the
integration run or data load activity that imported the current row of data into the
dataset or table.
WPA_LoadTimestamp
This field returns a value of type Date (to the millisecond) containing the date and
time of the integration run that imported the current row of data into the dataset or
table.
WPA_RowID
This field returns a value of type Text containing a unique row identifier for each
row in a data load activity or integration run.
WPA_UpdateID
This field returns a value of type Text containing a unique identifier of the data load
activity that updated the current row of data in the table.
WPA_UpdateTimestamp
This field returns a value of type Date (to the millisecond) containing the date and
time of the data load activity that updated the current row of data in the table.
You can’t modify or delete these fields, but you can hide them. Use these fields with the other fields to uniquely identify rows of
data in the table or dataset from multiple integrations.
You can also use these fields to group data together from a single data load or integration. Example: You can create a Group By
stage, group on the WPA_LoadID field, and count the number of rows from each integration run.
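That Group By idea can be sketched outside Workday with plain Python (hypothetical rows and load IDs; not a Prism API):

```python
from collections import Counter

# Hypothetical rows, each tagged with the WPA_LoadID of the integration
# run that imported it.
rows = [
    {"WPA_LoadID": "load-001", "worker": "A"},
    {"WPA_LoadID": "load-001", "worker": "B"},
    {"WPA_LoadID": "load-002", "worker": "C"},
]

# Count the number of rows from each integration run.
counts = Counter(r["WPA_LoadID"] for r in rows)
print(counts)  # Counter({'load-001': 2, 'load-002': 1})
```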
Related Information
Tasks
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset Using Workday Data
Prerequisites
Security:
Prism: Tables Create domain in the Prism Analytics functional area when creating a table.
Any of these security requirements when editing a table:
Prism Datasets: Owner Manage domain in the Prism Analytics functional area.
Prism Datasets: Manage domain in the Prism Analytics functional area.
Table Editor permission on the table.
Table Owner permission on the table.
Can Insert Table Data permission on the table.
Context
When you load a delimited file into a table, you must define how Workday parses the data. You define the parsing options on the
Edit Parsing Options step when you load data into a table, such as creating a table by uploading a file, or when adding more rows
to an existing table.
Workday supports delimited files that are RFC 4180-compliant. For more information, see RFC 4180.
Steps
1. Access the Edit Parsing Options step for loading data into a table.
2. Configure the parsing options.
As you complete this task, consider:
Option Description
Row Delimiter
Specifies the single character that separates rows (or records) in your source
data files.
In most delimited files, rows are separated by a new line, such as the line feed
character, carriage return character, or carriage return plus line feed. Line feed
is the standard new line representation on UNIX-like operating systems. Other
operating systems (such as Windows) might use carriage return individually,
or carriage return plus line feed. Selecting Any New Line causes Workday to
recognize any of these representations of a new line as the row delimiter.
Field Delimiter
Specifies the single character that separates the fields (or columns) of a row
in your source data files. Comma is the most common field delimiter.
Field Names
Specifies the default name of each field. You can change the field names
after you finish defining the parsing options.
Workday automatically treats the first line in each source file as a header row
instead of as a row of data. If you don't want to use the first line as names for
your fields, clear Use values from first row.
Escape Character
Specifies the single character used to escape the Quote Character or another
instance of the Escape Character when a Quote Character is specified.
Workday reads an escape character as data only if it's escaped with another
escape character.
If your data values contain quote characters as data, those characters must
be escaped and the entire field value must be enclosed with the Quote
Character. If not, then Workday assumes that the quote character denotes a
new field.
Quote Character
The character that encloses a single field value, if any.
Some delimited files use the quote character to enclose individual data
values. The quote character is typically the double quote character (").
If a field value contains a field delimiter as data, then the field value must be
enclosed in the Quote Character, otherwise Workday assumes that the field
delimiter denotes a new field.
If a field value contains the quote character as data, then the field value must
be enclosed in the Quote Character and it must be escaped, either by the
Escape Character or another quote character.
If a field value contains a row delimiter (such as a new line character) as data,
then the field value must be enclosed in the Quote Character.
Example: Consider a row with these 3 field values:
weekly special
wine, beer, and soda
"2 for 1" or 9.99 each
If the field delimiter is a comma, the quote character is a double quote, and
the escape character is a double quote, then a correctly formatted row in the
source data looks like:
"weekly special", "wine, beer, and soda", """2 for 1"" or 9.99 each"
Comment Character
Specifies the character that represents a comment at the beginning of a line
of text. Workday ignores every line in the external file that starts with the
comment character. Example: Select # as the Comment Character to ignore
lines that start with #.
When the Comment Character is empty, Workday reads all lines as rows of
data.
Rows to ignore
Specifies the number of lines at the beginning of the file to ignore when
reading the source file. To use this with the Use values from first row option,
ensure that the line containing the field names is visible and is the first
remaining line.
Jagged Rows Select these options when the schema of the source file isn't an exact match
of the table schema, and you want Workday to ignore any missing or extra
fields at the end of the file schema.
Field Options These options control how to handle whitespace characters in Text fields.
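The correctly formatted row shown above can be checked with Python's csv module as a stand-in parser (comma field delimiter, double quote serving as both quote and escape character, as in the example):

```python
import csv
import io

row = '"weekly special", "wine, beer, and soda", """2 for 1"" or 9.99 each"'
# skipinitialspace mirrors the spaces after delimiters in the sample row;
# doubling the quote character escapes it, as described above.
fields = next(csv.reader(io.StringIO(row), skipinitialspace=True))
print(fields)  # ['weekly special', 'wine, beer, and soda', '"2 for 1" or 9.99 each']
```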
Related Information
Tasks
Steps: Create a Table by File Upload
Reference
Reference: Naming Guidelines
Prerequisites
Prism Datasets: Owner Manage domain in the Prism Analytics functional area.
Prism Datasets: Manage domain in the Prism Analytics functional area.
Table Editor permission on the table.
Table Owner permission on the table.
Table Schema Editor permission on the table.
Context
You can edit a table by changing the table display name or changing the schema.
You can change the table schema by adding fields, deleting fields, or changing field attributes, such as the field type. However, you
can only change field attributes when the table contains no data.
Steps
Related Information
Concepts
Concept: Tables
Reference
Reference: Naming Guidelines
When you add or edit a field in a table in the Data Catalog, you define these attributes:
Display Name You can change this name at any time. The name must conform to name
validation rules.
API Name The API name must be unique in the table. Workday automatically selects
an API name based on the field name you enter, modifying it to make it
meet the name requirements. Click Change to change the API name. You
can’t change the API name after you save the table.
Field Type Select the field type that the values in this field must match to be
recognized as valid data. You need to configure additional field attributes
for some field types you select.
Date Format (Required for Date fields) Select the date format that the values in this
field must match to be recognized as valid date data.
Digits Before and Digits After (Required for Numeric fields) Enter the maximum number of digits before
and after the decimal point that the values in this field can have to be
recognized as valid numeric data. The sum of these 2 options must be
less than or equal to 38.
Business Object (Required for Instance and Multi-Instance fields) Select the business
object to associate with the values in this Instance field.
Description (Optional) Add a helpful field description that explains the meaning and
data value characteristics of the field.
Required Specifies that the field must contain data. Make a field required to ensure
it doesn’t contain a NULL value when you insert or update data in the
table. When you insert or update data in a table and this field is NULL,
Workday rejects the row and instead includes it in the error file.
Default Value Use the Default Value to define a value for a field if the uploaded source
file schema doesn't include that field. When the source file schema
doesn't include a field, Workday uses the default value for all rows in the
source file.
Note: The Default Value is only used when the source file schema is
missing a field, not when a particular field value is NULL.
Use as External ID Use this attribute to mark a single field in a table as a key. Specify a field
as the external ID when the values in the field uniquely identify each row
from its source.
Define a field as the external ID if you want to update or delete data in the
table based on data in an external file. This attribute is similar to a primary
key in a relational database.
Ensure that each field value in the external ID field is unique. If the
field values aren’t unique, you'll get unexpected results. Workday
doesn’t enforce the uniqueness.
You can’t define a default value for fields used as an external ID.
The field value must come from the external source and can’t be
NULL.
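Since Workday doesn't enforce uniqueness of the external ID field, a pre-load duplicate check can help (illustrative rows and field name, not a Workday API):

```python
from collections import Counter

# Hypothetical rows keyed by an external ID field named Emp_ID.
rows = [{"Emp_ID": "100"}, {"Emp_ID": "101"}, {"Emp_ID": "100"}]

# Any ID appearing more than once would cause unexpected results on load.
dupes = [k for k, n in Counter(r["Emp_ID"] for r in rows).items() if n > 1]
print(dupes)  # ['100']
```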
Related Information
Concepts
Concept: Tables
Reference
Reference: Naming Guidelines
Prerequisites
Base dataset using external data (from uploading a file or connecting to a server) exists in the Data Catalog.
Security:
Prism Datasets: Create domain in the Prism Analytics functional area when creating a dataset.
Any of these security requirements when editing an existing dataset:
Prism Datasets: Manage domain in the Prism Analytics functional area.
Dataset Editor permission on the dataset.
Dataset Owner permission on the dataset.
Context
When you bring external data into a dataset, you must describe the source data in a tabular format. You do this by describing how
to parse the data.
Delimited records are separated by line breaks, and fields within a line are separated by a special character called the delimiter
(usually a comma or tab character). If the delimiter also exists in the field values, it must be escaped. Datasets support single
character escapes (such as a backslash), as well as enclosing field values in double quotes.
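The quoting convention described above can be illustrated with Python's csv module, which follows the RFC 4180 style of doubling an embedded quote character. The field values here are made up for illustration:

```python
import csv
import io

# A field containing the delimiter (comma) must be enclosed in quotes;
# an embedded quote character is doubled ("" -> ").
line = 'widget,"says ""hi"", loudly",9.99\n'
row = next(csv.reader(io.StringIO(line)))
print(row)  # ['widget', 'says "hi", loudly', '9.99']
```

Without the quotes around the second field, the parser would split it at each comma into separate fields.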
Steps
1. Access the Edit Dataset Transformations task for a base dataset using external data.
2. Edit the Parse stage.
As you complete this task, consider:
Option Description
Row Delimiter
Specifies the single character that separates rows (or records) in your source
data files.
In most delimited files, rows are separated by a new line, such as the line feed
character, carriage return character, or carriage return plus line feed. Line feed
is the standard new line representation on UNIX-like operating systems. Other
operating systems (such as Windows) might use carriage return individually,
or carriage return plus line feed. Selecting Any New Line causes Workday to
recognize any of these representations of a new line as the row delimiter.
Field Delimiter
Specifies the single character that separates the fields (or columns) of a row
in your source data files. Comma and tab are the most common field
delimiters.
Field Names
Specifies the default name of each field. You can change the field names in
the Parse stage after you finish defining the parsing options.
Workday automatically treats the first line in each source file as a header row
instead of as a row of data. If you don't want to use the first line as names for
your fields, deselect the Field Names check box.
Escape Character
Specifies the single character used to escape the Quote Character or another
instance of the Escape Character when a Quote Character is specified.
Workday reads an escape character as data only if it's escaped with another
escape character.
If your data values contain quote characters as data, those characters must
be escaped and the entire field value must be enclosed with the Quote
Character. If not, then Workday assumes that the quote character denotes a
new field.
For comma-separated values (CSV) files, it's common practice to escape field
delimiters by enclosing the entire field value within double quotes. If your
source data uses this convention, then you should specify a Quote Character.
Quote Character
Some delimited files use the quote character to enclose individual data
values. The quote character is typically the double quote character (").
If a field value contains a field delimiter as data, then the field value must be
enclosed in the Quote Character, otherwise Workday assumes that the field
delimiter denotes a new field.
If a field value contains the quote character as data, then the field value must
be enclosed in the Quote Character and it must be escaped, either by the
Escape Character or another quote character.
If a field value contains a row delimiter (such as a new line character) as data,
then the field value must be enclosed in the Quote Character and Field values
contain new lines must be selected.
If the field delimiter is a comma, the quote character is a double quote, and
the escape character is a backslash, then a correctly formatted row containing
these three field values:

weekly special
wine, beer, and soda
"2 for 1" or 9.99 each

looks like:

weekly special,"wine, beer, and soda","\"2 for 1\" or 9.99 each"
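The escaping convention just described can be checked with Python's csv module. This sketch parses a row built from those three field values, using a backslash escape character rather than doubled quotes:

```python
import csv
import io

# Comma field delimiter, double-quote Quote Character, backslash Escape Character.
line = 'weekly special,"wine, beer, and soda","\\"2 for 1\\" or 9.99 each"\n'
row = next(csv.reader(io.StringIO(line),
                      quotechar='"', escapechar='\\', doublequote=False))
print(row)  # ['weekly special', 'wine, beer, and soda', '"2 for 1" or 9.99 each']
```

The backslash removes the special meaning of the quote character that follows it, so the quotes survive as data.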
Rows to ignore
Specifies the number of lines at the beginning of the file to ignore when
reading the source file while creating and publishing the dataset. To use this
with the From First Table Row option, ensure that the line containing the field
names is visible and is the first remaining line.
Field values contain new lines
When enabled, Workday reads the new line characters inside quote
characters as part of the field value instead of as a row delimiter. Workday
interprets any row delimiter character outside of quote characters as a new
record.
Enabling this option might impact the time to publish a dataset if Workday
reads very large source files.
Note that you might get unexpected results if you enable this option and the
source file has malformed data (such as when a field value has either an
opening or closing quote character, but not both). Try to ensure that your
source data is well formed when using this option.
Trim trailing and leading whitespace characters in Text fields
Select this check box if you want to remove whitespace characters at the
beginning and end of Text fields.
Related Information
Tasks
Add a Stage to a Dataset
Steps: Create a Dataset with External Data (SFTP Server)
Steps: Create a Dataset with External Data (Upload a File)
Reference
Reference: Naming Guidelines
Examples
Example: Bring in International-Formatted Numeric Fields
Prerequisites
Context
You can transform data in a dataset by adding Prism calculated fields to the dataset. Prism calculated fields change the number of
fields in a dataset; they don't change the number of records.
Component Description
Expression
The expression describes a processing step that you want to perform on the data
in other fields in the dataset. Expressions can include:
Field type
The expression determines the return value field type.
If the expression includes a function, then the return value of the function
determines the field type.
If the expression doesn't include any function, then the operator determines the
field type:
You might need to create several Prism calculated fields to achieve the result you want. You can use the result of a Prism
calculated field in the expressions of other Prism calculated fields to define a chain of processing steps.
Convert a field type to another field type.
Change an Integer field type to a Long field type, so that you can use the
EPOCH_MS_TO_DATE function on it. Example:
EPOCH_MS_TO_DATE(TO_LONG([Date in MS]))

Perform an arithmetic calculation.
Calculate the net profit based on the revenue and expenses. Example:
[Revenue] - [Expenses]
([Sale]/[Total Revenue])*100

Extract values from a different field.
Extract the currency codes from a currency field using the EXTRACT_CODE
function. Example:
EXTRACT_CODE([Revenue])

Combine the values from 2 Text fields into 1 Text field.
Combine separate fields consisting of Last Name and First Name into 1 field using
the CONCAT function.

Test for a particular condition.
Test whether the year is between 2019 and 2020, inclusive.

Pad the beginning of a Text field with leading zeros.
The [EEID] is a Text field containing numeric data of varying lengths, and you want
to create an [Employee ID] field that is always 7 characters long, including leading
zeros where needed.
Create a calculated field called [PaddedID] that adds enough zeros to the
beginning of [EEID] to create a full string, even if [EEID] is an empty string.
Example:
CONCAT("0000000", [EEID])
Then create a calculated field called [Employee ID] that returns the last 7
characters of the [PaddedID] field.

Return a particular date given a Date field.
The [EFF_Date] field is a Date field, and you want to calculate and return the last
day of the current month.
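The leading-zeros technique can be sketched in Python. String slicing stands in for the last-7-characters step, whose exact Prism expression isn't shown in the guide:

```python
def pad_employee_id(eeid: str, width: int = 7) -> str:
    """Mimic the two chained calculated fields: prepend zeros, keep the last `width` chars."""
    padded = "0000000" + eeid   # CONCAT("0000000", [EEID]); works even for an empty string
    return padded[-width:]      # last 7 characters of [PaddedID]

print(pad_employee_id("42"))       # 0000042
print(pad_employee_id(""))         # 0000000
print(pad_employee_id("1234567"))  # 1234567
```

The chain works because prepending seven zeros guarantees the string is at least seven characters long before slicing.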
To delete a Prism calculated field, access the Edit Dataset Transformations task, and select the menu for the Prism calculated field
you want to remove and select Delete Field. Deleting a field might cause errors if other Prism calculated fields refer to the deleted
field.
Steps
Note: Workday won't consider the data values as comments if you enclose these characters and the comment within
double quotation marks.
Related Information
Concepts
Concept: Prism Calculated Fields
Concept: Hiding Dataset Fields
Concept: Prism Expression Language
Reference
Reference: Naming Guidelines
The Next Level: Prism Analytics Best Practices
Examples
Example: Bring in International-Formatted Numeric Fields
Prerequisites
Context
One way you transform data in a dataset is by adding a stage. There are different types of stages. You can add a new stage to the
end of any pipeline in your dataset. Some stage types can only be added to derived datasets.
Steps
The parameters you define depend on the type of stage that you're adding.
5. (Optional) Add a description while editing any stage except the Import stage of a derived dataset.
Related Information
Reference
Reference: Filter Stages
Reference: Group By Stages
Reference: Join Stages
Reference: Union Stages
Prerequisites
Prism Datasets: Create domain in the Prism Analytics functional area when creating a dataset.
Context
You can use the Manage Fields stage to view field changes, select fields, and edit fields.
Note: Decide on field names before you define Prism calculated fields and stages. Changing a field name later on will break Prism
calculated field expressions and stages that rely on it.
Steps
Option Description
Input Name
Clear the check box to hide the field from future stages. You can hide fields
to protect sensitive data or to use a calculated field instead of the fields
that it's based on. Hide unpopulated or sparse fields or Prism calculated
fields that do interim processing.
Output Type
The field type determines which functions can use the field as an argument.
Create a Prism calculated field to change a field type to a Date or Currency
field type or to change a Currency field type to a numeric field type.
Related Information
Concepts
Concept: Hiding Dataset Fields
Concept: Prism Calculated Fields
Concept: Table and Dataset Field Types
Tasks
Add a Prism Calculated Field to a Dataset
Convert Dataset Text Fields to Date Fields
Prerequisites
Context
After importing external data into a dataset, if Workday assigned a different field type than the one you want, then you can change
the field type. You might want to change the field type to:
Example: You could change an Integer field type to a Long field type to use the EPOCH_MS_TO_DATE function on it.
Example: If Workday doesn't automatically recognize an instance value, change it to the Instance field type.
Example: If Workday doesn't automatically recognize a date value, change it to the Date field type.
Use the Manage Fields stage to change most field types. However, create a Prism calculated field to make these field type
changes:
Steps
Related Information
Concepts
Concept: Table and Dataset Field Types
Concept: Prism Calculated Fields
Tasks
Add a Prism Calculated Field to a Dataset
Reference
Reference: Currency Format Requirements for External Data
Prerequisites
Context
When you create a dataset using external data, the data might contain date or time information. When you create a dataset,
Workday reads the data to determine which field type to assign to each field. If Workday recognizes the format of a date field, it
automatically assigns the Date field type. However, Workday only recognizes some date formats. If Workday doesn't recognize a
date format, then it assigns the Text field type to the field.
If Workday doesn't recognize a date field format, you can create a Prism calculated field to convert the Text field type to a Date field
type.
Steps
Suppose that you have a field called start_date that contains data that looks like 25-May-2017. Use this expression:
TO_DATE([start_date], "dd-MMM-yyyy")
5. In the inspector panel, enter a name. Field names must follow name validation rules.
6. Save the Prism calculated field by pressing Enter on your keyboard.
Clicking another field on the page also saves the changes to the Prism calculated field.
Result
Prism creates a new field and populates it by converting the data to the Date field type.
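Outside Workday, the same conversion can be sketched with Python's datetime. This assumes English month abbreviations; the Prism pattern dd-MMM-yyyy corresponds roughly to strptime's %d-%b-%Y:

```python
from datetime import datetime

value = "25-May-2017"                           # same shape as the start_date data
parsed = datetime.strptime(value, "%d-%b-%Y").date()
print(parsed.isoformat())  # 2017-05-25
```

If the text doesn't match the pattern, strptime raises a ValueError, whereas Prism's TO_DATE behavior for unparseable values is governed by the dataset's own error handling.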
Related Information
Concepts
Concept: Table and Dataset Field Types
Concept: Prism Calculated Fields
Tasks
Change Dataset Field Types
Add a Prism Calculated Field to a Dataset
Reference
Reference: Supported Date Formats for External Data in Tables and Datasets
Reference: Naming Guidelines
Prerequisites
Context
Workday displays a subset of dataset rows to give you insight into your source data when you edit a dataset. Workday
automatically displays the first 20,000 records starting with the first file in the dataset. You can change the example data to select
between:
No Example
Default
Custom Example
You can define rules to curate what data you see and how each stage impacts it. When you apply rules to the custom
example data displayed, Workday reads the data from the files again and updates the statistics and field information for
each field in the inspector panel.
You might want to create custom example rows to get more precise statistics on each field in the inspector panel.
You can disable all example data temporarily to improve responsiveness if your dataset has a lot of fields. After you make some
changes to the dataset, such as adding a new stage or calculated field, you can then enable the example data by selecting the
number of rows to display.
Note: If you have permissions to view the View Dataset Transformations report, then you can change example data but won't be
able to save.
Steps
You can convert rule conditions created in Basic mode to rule expressions in Advanced mode. If you switch back to Basic
mode from Advanced mode, you need to define rule conditions again. Workday doesn't convert Advanced mode rule
expressions to Basic mode rule conditions.
Result
Workday applies the rule to the example data and displays rows that meet the defined conditions.
Next Steps
You can copy and paste the rules for a base dataset to other derived datasets using the same base dataset by selecting the related
actions menu by the custom example rule.
Related Information
Reference
Reference: Boolean Expressions
2022R2 What's New Post: Custom Examples for Prism Dataset Transformations
When you're editing dataset transformations, you can view how your changes affect an example of the data. This view enables you
to see how different stages impact your data.
You can change the data displayed by selecting any of these example options:
No Example
No data displayed. This option has the fastest load time. Requires Dataset
Editor permission or better on the dataset.
Default Example
Workday displays a subset of the data, enabling you to assess how different
stages affect a subset of your data. Requires Dataset Editor permission or
better on the dataset.
Note: If you have permissions to view the View Dataset Transformations report, then you can change example data but won't be
able to save.
Workday displays up to 20,000 rows when you select either Default or Custom Example.
Example: Your dataset includes a Join stage using a right outer join and a Prism calculated field with a CASE function that
evaluates the value in a field from the secondary (right) pipeline. The example data might not find a match and return NULL for the
field value, whereas the published data finds a match and returns a non-null value. As a result, the CASE function returns a different
value in the example data than in the published data.
Related Information
Reference
2022R2 What's New Post: Custom Examples for Prism Dataset Transformations
You can convert filter conditions created in Basic mode to filter expressions in Advanced mode. If you switch back to Basic mode
from Advanced mode, you need to define filter conditions again. Workday doesn't convert Advanced mode filter expressions to
Basic mode filter conditions.
Basic
This mode displays prompts to help you define filter criteria, creating filter
conditions.
Advanced
This mode enables you to define the filter criteria by writing an expression
(the filter expression). Filter expressions must evaluate to true or false.
Basic Mode
Define these options when you configure Filter stages in Basic mode:
If All/If Any
Filter Condition
Click Add Filter to add a new filter condition. Select a field and operator
from the prompts, and then enter a value in the empty text field. Workday
reads the value in the text field exactly as is. You don't need to add any
quotation marks or escape characters for Text field types.
Advanced Mode
In Advanced mode, use the Prism expression language to write a boolean expression. Example: TO_STRING([zip code])
LIKE("94*")
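The effect of that filter expression can be approximated in Python with fnmatch, whose * wildcard is assumed here to behave like the one in LIKE. The sample rows are made up for illustration:

```python
from fnmatch import fnmatch

rows = [{"zip code": 94110}, {"zip code": 10001}, {"zip code": 94601}]

# TO_STRING([zip code]) LIKE("94*"): convert to text, keep rows starting with "94".
kept = [r for r in rows if fnmatch(str(r["zip code"]), "94*")]
print([r["zip code"] for r in kept])  # [94110, 94601]
```

As in the Filter stage, the predicate evaluates to true or false for each row, and only true rows pass through.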
You can insert single and multiline comments into any location within a Prism expression. Workday treats all text between these
characters as comments: /* */
Note: Workday won't consider the data values as comments if you enclose these characters and the comment within double
quotation marks.
Related Information
Concepts
Concept: Dataset Stages
Tasks
Add a Stage to a Dataset
Reference
Reference: Boolean Expressions
Reference: Join Stages
Reference: Union Stages
Reference: Currency Format Requirements for External Data
Choose Grouping Fields
Select 1 or more fields by which to group values together. If you summarize
(aggregate) values in a group that contains different currency codes,
Workday returns NULL values.
Average
Average returns the average of all valid numeric values for the specified
field. It sums all values in the specified field and divides by the number of
valid (NOT NULL) rows. You can calculate the average on any numeric field.
Count
Count returns the number of rows in each group.
Max
Max returns the largest (maximum) value from the specified field. You can
calculate the maximum on any numeric or date field.
Min
Min returns the smallest (minimum) value from the specified field. You can
calculate the minimum on any numeric or date field.
Sum
Sum returns the total of all values from the specified field. You can
calculate the sum on any numeric field.
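The five aggregations can be sketched in plain Python over a hypothetical dataset grouped by a region field:

```python
from collections import defaultdict

rows = [
    {"region": "West", "amount": 10.0},
    {"region": "West", "amount": 30.0},
    {"region": "East", "amount": 5.0},
]

# Group rows by the grouping field, collecting the values to aggregate.
groups = defaultdict(list)
for row in rows:
    groups[row["region"]].append(row["amount"])

summary = {
    region: {
        "count": len(vals),
        "sum": sum(vals),
        "avg": sum(vals) / len(vals),
        "min": min(vals),
        "max": max(vals),
    }
    for region, vals in groups.items()
}
print(summary["West"])  # {'count': 2, 'sum': 40.0, 'avg': 20.0, 'min': 10.0, 'max': 30.0}
```

A real Group By stage would also skip NULL values when averaging; this sketch assumes all values are valid.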
Related Information
Concepts
Concept: Dataset Stages
Tasks
Add a Stage to a Dataset
Reference
Reference: Filter Stages
Reference: Union Stages
The Next Level: Prism Analytics Best Practices
You can include 2 datasets (dataset pipelines) in a Join stage. You can add additional Join stages in the pipeline if you need to join
multiple datasets.
Join Pipeline
Select a dataset pipeline to join with the primary pipeline. If there aren't
any pipelines available, select Add Another Pipeline to create a pipeline by
importing a dataset.
Match Rows
Select 1 or more fields from each dataset pipeline whose values should
match each other. Select the fields that uniquely identify rows in each
dataset pipeline. Defining the matching rows is similar to defining a primary
key/foreign key relationship in relational database terms.
Join Type
Select the join type. The join type specifies which rows from each dataset
pipeline to include in the join result.
Select Fields
Select which fields from each pipeline to include in the join result. Any field
you don't include is dropped from that stage in the pipeline and all later
stages.
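The join types can be sketched in plain Python over two hypothetical pipelines matched on an id field. In an outer join, unmatched fields come back as None, which is how a NULL shows up here:

```python
workers = [{"id": 1, "name": "Ana"}, {"id": 2, "name": "Bo"}]
salaries = [{"id": 1, "salary": 50000}, {"id": 3, "salary": 60000}]

by_id = {s["id"]: s for s in salaries}   # match rows on the "id" field

# Left outer join: keep every primary-pipeline row; no match -> None (NULL).
left = [{**w, "salary": by_id.get(w["id"], {}).get("salary")} for w in workers]

# Inner join: keep only rows with a match in both pipelines.
inner = [{**w, "salary": by_id[w["id"]]["salary"]} for w in workers if w["id"] in by_id]

print(left)   # [{'id': 1, 'name': 'Ana', 'salary': 50000}, {'id': 2, 'name': 'Bo', 'salary': None}]
print(inner)  # [{'id': 1, 'name': 'Ana', 'salary': 50000}]
```

Building a lookup keyed by the match field is why the matched fields should uniquely identify rows: duplicate keys would silently overwrite each other in this sketch.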
Related Information
Concepts
Concept: Dataset Stages
Tasks
You can include 2 datasets (dataset pipelines) in a Union stage, and add additional Union stages in the pipeline if you need to
combine multiple datasets. If you're familiar with SQL, the Union stage is the equivalent of a UNION ALL operator.
Define at least 1 set of matched fields (field mapping) in a Union stage before saving it. If you don't specify a field in an input
dataset for a field mapping, Workday will use a NULL value in that field from that input dataset.
Match Fields—Union Output
The name of the field that will be output from the Union stage for each field
mapping.
Match Fields—Primary Pipeline
The field from the primary pipeline to match with a field from the union
pipeline.
Match Fields—Union pipeline name
The field from the union pipeline to match with a field from the primary
pipeline.
Rematch
Click this button if you want to discard all field mappings and return to the
default field mappings that Workday detects and configures.
Clear All
Click this button to discard all field mappings. Then define at least 1 field
mapping in the Union stage before saving it.
Include All
Click this button to create a field mapping for each field in the input dataset
pipeline. Then you can select which fields to match from the other input
dataset pipeline.
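The NULL-fill behavior of an unmapped field can be sketched in plain Python, with dept standing in for a hypothetical field that one input pipeline lacks:

```python
primary = [{"name": "Ana", "dept": "HR"}]
union_pipe = [{"name": "Bo"}]            # this input has no "dept" field

output_fields = ["name", "dept"]

# UNION ALL: stack every row from both inputs; a field missing from an
# input dataset becomes NULL (None) in the output.
combined = [{f: row.get(f) for f in output_fields} for row in primary + union_pipe]
print(combined)  # [{'name': 'Ana', 'dept': 'HR'}, {'name': 'Bo', 'dept': None}]
```

Unlike a SQL UNION, no deduplication happens: every input row appears once in the output, matching UNION ALL semantics.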
You can create Prism calculated fields for input datasets in these locations:
In the original dataset. Any Prism calculated field you create in a dataset is available to both that dataset and any derived
dataset that imports it as an input dataset.
In the dataset pipeline of the derived dataset. Any Prism calculated field you create in a pipeline stage of a derived dataset
is available only to that derived dataset. It doesn't get pushed back to the original input dataset. You might want to create a
Prism calculated field in the pipeline of an imported dataset if you need a field to use a different field type, but you don't
want to change the field type of the original dataset. Example: You could create a Prism calculated field to change a zip
code field from Integer to Text to match it with a Text zip code field in another input dataset.
Related Information
Concepts
Concept: Dataset Stages
Tasks
Add a Stage to a Dataset
Reference
For Workday to convert fields (columns) to rows in derived datasets, follow these requirements:
Requirement Description
Related Information
Concepts
Concept: Unpivot Stages
Reference
2020R2 What’s New Post: Unpivot Stage
Examples
Example: Unpivot Stock Vesting Data in a Dataset
A boolean expression is an expression that evaluates to true or false. You can use the Prism expression language to write boolean
expressions in:
Filter expressions. Use Advanced mode when configuring a dataset Filter stage.
Custom example rule expressions. Use Advanced mode when configuring a rule for the dataset custom example.
Prism calculated fields. Example: Use a boolean expression in the CASE function.
The comparison value must be of the same field type as the field in the expression.
You can also use logical operators (such as AND and OR) or arithmetic operators (such as + or /) to define more complex
expressions.
When the field name includes a space or a special character, enclose the field name in square brackets: [ticker symbol].
When the comparison value is for a Text field type, enclose the value in double quotes (""). Example:
Comments in Expressions
You can insert single and multiline comments into any location within a Prism expression. Workday treats all text between these
characters as comments: /* */
Note: Workday won't consider the data values as comments if you enclose these characters and the comment within double
quotation marks.
Example of a comparison on a numeric field: age >= 21

Don't enclose comparison values for Date fields in quotation marks or use any other punctuation. Date comparison values use
these formats:

yyyy-MM-ddTHH:mm:ss:SSSZ
yyyy-MM-dd

If the date value is in Text format rather than Date format, the value must be enclosed in quotes like all text values.
When specifying a range of dates, always write the earlier date first.
If the boolean expression is a shortened version of the full format, then any values not included are assigned a value of zero (0).
Example: in the expression BETWEEN 2019-06-01 AND 2019-07-31, the end date is treated as 2019-07-31 with a time of zero
(midnight), so the expression doesn't include any values from later on July 31, 2019. To include values from July 31, 2019, use
BETWEEN 2019-06-01 AND 2019-08-01.
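The midnight cutoff can be checked with Python's datetime; BETWEEN is modeled here as an inclusive start <= value <= end comparison:

```python
from datetime import datetime

# A date with no time component is treated as midnight (zeros for the time parts).
start = datetime(2019, 6, 1)
end = datetime(2019, 7, 31)                  # i.e. 2019-07-31T00:00:00
noon_july_31 = datetime(2019, 7, 31, 12, 0)

print(start <= noon_july_31 <= end)                    # False: noon falls after midnight
print(start <= noon_july_31 <= datetime(2019, 8, 1))   # True: extended end date includes it
```

Extending the end date by one day is the simplest way to make the final day fully inclusive.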
Related Information
Concepts
Concept: Prism Expression Language
Tasks
Change the Dataset Example Rows
Reference
Reference: Filter Stages
Comparison Operators
This example illustrates how to create an Unpivot stage in a dataset to transpose fields (columns) of data into rows of data.
Scenario
You have a CSV file of stock vesting data for your workers. The workers' stock vests in 3 installments, with a different number of
shares on each date. The file contains 1 row of data for each worker, and a separate field for each vesting date and the number of
shares that vested on each date.
The file contains these fields: Name, Vest Date 01, QTY Vest 01, Vest Date 02, QTY Vest 02, Vest Date 03, QTY Vest 03.
Prerequisites
Create a table by uploading a CSV file using the data in this example. See Steps: Create a Table by File Upload.
Create a derived dataset from the table. See Steps: Create a Derived Dataset.
Security: Prism Datasets: Manage domain in the Prism Analytics functional area.
Steps
1. From the View Dataset Details report of the derived dataset, click Edit.
2. Click Add Stage, and select Unpivot.
First, we'll unpivot the 3 date input fields, Vest_Date_01, Vest_Date_02, and Vest_Date_03.
3. Click the plus sign next to Output Values so you have a total of 3 pairs of prompts.
4. Select these values in the Input Fields and Output Values prompts:
Result
The output of the Unpivot stage contains these rows and fields:
Next Steps
(Optional) Add a Manage Fields stage after the Unpivot stage to hide the Quantity Schedule field. In this example, the Quantity
Schedule field contains data that is redundant with the Vesting Schedule field.
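The wide-to-long transformation in this example can be sketched in Python. The output field names here (Vesting Schedule, Vest Date, Shares) and the sample values are illustrative, not the exact names the Unpivot stage produces:

```python
# One wide row per worker, with a separate field per vesting installment.
row = {
    "Name": "Ana",
    "Vest_Date_01": "2020-01-15", "QTY_Vest_01": 100,
    "Vest_Date_02": "2021-01-15", "QTY_Vest_02": 150,
    "Vest_Date_03": "2022-01-15", "QTY_Vest_03": 200,
}

# Unpivot: the three installments become three narrow rows.
unpivoted = [
    {
        "Name": row["Name"],
        "Vesting Schedule": f"Vest_Date_{i:02d}",  # which input field the row came from
        "Vest Date": row[f"Vest_Date_{i:02d}"],
        "Shares": row[f"QTY_Vest_{i:02d}"],
    }
    for i in (1, 2, 3)
]
print(len(unpivoted))  # 3
```

Each worker's single row fans out into one row per vesting date, which is what makes per-date analysis possible downstream.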
Related Information
Concepts
Concept: Unpivot Stages
Reference
Reference: Unpivot Stages
2020R2 What’s New Post: Unpivot Stage
This example describes how to use Prism Analytics to bring in external data that includes a numeric field containing International-
formatted numbers.
Scenario
Your company has a delimited file with numeric data formatted using periods to separate thousands and commas to separate
decimals. You want to bring this data into Prism Analytics as a Numeric field so that you can perform calculations on the data.
However, Workday only recognizes data in an external file as valid numeric data when it:

The source file contains this data:

Key,Amount
1,"9.000,1222-"
2,"11.111,2333"
3,"9.999.999,34-"
4,"7.777.777,45"
5,"8.888,56"
In your file, no value has more than 4 digits after the decimal mark.
Prerequisites
Create a table by file upload, using the CSV file to define the source schema. The table should have these fields:
Create a derived dataset based on the table, and use DDS Intl as the dataset name.
Security:
Steps
1. Access the Edit Dataset Transformations task for the DDS Intl dataset.
2. Create a Prism calculated field that removes the thousands separators by replacing any period with an empty string.
a. Select Add Field.
b. Enter this expression in the expression editor, and press Enter or Return on your keyboard:
REGEX_REPLACE([Amount],"\.","")
c. In the inspector panel, enter this as the field name: Amount no thousands
3. Create a Prism calculated field that replaces the decimal comma with a decimal period.
a. Select Add Field.
b. Enter this expression in the expression editor, and press Enter or Return on your keyboard:
REGEX_REPLACE([Amount no thousands],",",".")
c. In the inspector panel, enter this as the field name: Amount decimal period
4. Create a Prism calculated field that adds any negative sign that exists at the end of the value to the front of the value.
a. Select Add Field.
b. Enter this expression in the expression editor, and press Enter or Return on your keyboard:
CASE WHEN SUBSTRING([Amount decimal period], LENGTH([Amount decimal period])-1, LENGTH([Amount decimal period])) = "-"
THEN CONCAT("-",REGEX_REPLACE([Amount decimal period],",",""))
ELSE [Amount decimal period]
END
c. In the inspector panel, enter this as the field name: Amount minus sign
5. Create a Prism calculated field that removes any minus sign at the end of the value.
a. Select Add Field.
b. Enter this expression in the expression editor, and press Enter or Return on your keyboard:
CASE WHEN SUBSTRING([Amount minus sign], LENGTH([Amount minus sign])-1, LENGTH([Amount minus sign])) = "-"
THEN SUBSTRING([Amount minus sign], 0, LENGTH([Amount minus sign])-1)
ELSE [Amount minus sign]
END
c. In the inspector panel, enter this as the field name: Amount final
6. Create a Prism calculated field that converts the [Amount final] Text field to a Numeric field.
a. Select Add Field.
b. Enter this expression in the expression editor, and press Enter or Return on your keyboard:
c. In the inspector panel, enter this as the field name: Amount Numeric
7. Select Add Stage and then select Manage Fields.
8. Hide these fields by clicking the eye icon:
Amount
Amount no thousands
Amount decimal period
Amount minus sign
Amount final
9. Select Done.
10. Save your dataset.
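Steps 2 through 6 can be condensed into one Python function as a sketch. The regex and string handling mirror the calculated-field chain above; float() stands in for the final Text-to-Numeric conversion, whose exact Prism expression isn't shown:

```python
import re

def to_numeric(amount: str) -> float:
    """Normalize an International-formatted value like '9.000,1222-' to a number."""
    s = re.sub(r"\.", "", amount)   # step 2: drop the period thousands separators
    s = s.replace(",", ".")         # step 3: decimal comma -> decimal period
    if s.endswith("-"):             # steps 4-5: move a trailing minus to the front
        s = "-" + s[:-1]
    return float(s)                 # step 6: convert Text to Numeric

print(to_numeric("9.000,1222-"))   # -9000.1222
print(to_numeric("7.777.777,45"))  # 7777777.45
```

Running the thousands-separator removal first matters: once periods are gone, the lone comma can safely become the decimal point.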
Result
Prerequisites
Security:
Prism: Manage File Containers domain in the Prism Analytics functional area when uploading a file.
Any of these security requirements:
Prism: Tables Owner Manage domain in the Prism Analytics functional area.
Prism: Tables Manage domain in the Prism Analytics functional area.
Table Owner permission on the table.
Table Editor permission on the table.
Can Delete Table Data permission on the table.
Can Insert Table Data permission on the table.
Can Truncate Table Data permission on the table.
Can Update Table Data permission on the table.
Context
You can change the rows of data in a table by creating and running a data change task. You can:
How you change the rows in the table depends on the operation you select, such as upsert or delete, and the source data. The
source you specify must contain the rows you want to change in the table, such as new rows to insert, or existing rows to update or
delete.
To create, edit, or view a data change task on a target table, you must have permission on the target table. Example: To create a
data change task using the upsert operation, you must have both insert and update permission on the target table.
Steps
Option Description
When you upload multiple files, each file must use the same schema. Workday supports RFC 4180-compliant delimited files. For more information, see RFC 4180.
4. On the Source Options step, define how to parse the data in the files you uploaded for file upload or SFTP sources.
a. Parse External Data in a Table.
b. Review the fields Workday created based on the parsed file, and modify the fields if necessary.
Select a field in the list and view the field details in the inspector panel on the right side. You might need to:
Change the Field Type when Workday assigns the wrong field type. Workday assigns the field type based on
the first few rows only. Example: Workday assigns the Numeric field type to a field with ZIP code data
because the example rows that it evaluates only contain numerals. However, based on your knowledge of
the source data, you know that some ZIP code values contain letters or a hyphen, so you change the field
type to Text.
Change other field attributes based on the field type to ensure that Workday correctly parses the data, such
as Digits Before, Digits After, or Date Format.
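The ZIP code pitfall described above comes from inferring a field type from only the first few rows. A small sketch shows the failure mode; the sample size and data here are illustrative assumptions, not Workday's actual inference logic.

```python
def infer_field_type(values, sample_size=3):
    """Naive inference in the spirit described above: look only at
    the first few rows and pick Numeric if they all parse as digits."""
    sample = values[:sample_size]
    if all(v.replace("-", "").isdigit() for v in sample):
        return "Numeric"
    return "Text"

# The first few ZIP codes are all numerals, so sampling misclassifies
# the field even though later values contain letters.
zips = ["94105", "10001", "60601", "K1A0B1", "02139-4307"]
print(infer_field_type(zips))     # "Numeric": the sample hides the exceptions
print(infer_field_type(zips, 5))  # "Text" once the full column is considered
```

This is why the guide tells you to override the inferred Field Type based on your own knowledge of the source data.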
Option Description
Insert: Workday keeps the existing data in the table and adds the new data from the source. This operation is also known as Append.
Truncate and Insert: Workday deletes the existing data in the table and replaces it with the data from the source. This operation is also known as Replace.
Delete: Workday deletes a row from the table when it matches a row in the source.
Update: Workday updates a row in the table when it matches a row in the source. To select this operation, you must use a table where you configured 1 field as the external ID.
Upsert: Workday updates a row in the table when a matching row already exists and inserts the row when it doesn't exist. To select this operation, you must use a table where you configured 1 field as the external ID.
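The five operations above can be sketched as set operations over rows keyed by the external ID. This is an illustrative model only; the field names and row shapes are assumptions, not Prism's implementation.

```python
def apply_operation(table, source, operation, key="external_id"):
    """Sketch of the target operations described above, over lists of
    dicts keyed by a hypothetical external ID field."""
    rows = {r[key]: dict(r) for r in table}
    src = {r[key]: dict(r) for r in source}
    if operation == "insert":
        return table + source                       # keep existing, append new
    if operation == "truncate_and_insert":
        return list(source)                         # replace everything
    if operation == "delete":
        return [r for r in table if r[key] not in src]
    if operation == "update":
        return [src.get(r[key], r) for r in table]  # change matching rows only
    if operation == "upsert":
        rows.update(src)                            # update matches, insert the rest
        return list(rows.values())
    raise ValueError(operation)

table = [{"external_id": 1, "amt": 10}, {"external_id": 2, "amt": 20}]
source = [{"external_id": 2, "amt": 25}, {"external_id": 3, "amt": 30}]
print(apply_operation(table, source, "upsert"))
# row 2 is updated, row 3 is inserted, row 1 is untouched
```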
7. On the Mapping step, select a field in the target table to use as the operation key for delete, update, or upsert operations.
You can specify 1 of these target table fields:
The field configured as the external ID in the table: You can use this field for delete, update, or upsert operations.
WPA_LoadID: You can use this field for delete or update operations only.
WPA_RowID: You can use this field for delete or update operations only.
8. Select a source field for each target field that you want to modify. Workday requires that you map any field used as the
operation key.
Workday lists the source fields that are compatible for a specific target field. If Workday doesn't list a source field you want,
verify the source field attributes, such as the digits before, digits after, or business object. You can navigate to the Source
Options step to change the field attributes for file upload and SFTP sources.
You can click Reset Matches to revert all mappings you changed to the simple match algorithm that Workday uses by
default. The simple match algorithm:
Is case insensitive.
Ignores spaces.
Ignores underscore characters.
Matches on the field API name.
Won't match fields with different field types.
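The simple match rules above can be sketched as name normalization plus a type check. The field metadata shapes here are illustrative assumptions, not Prism's actual data model.

```python
def normalize(name: str) -> str:
    """Normalization implied by the rules above: case-insensitive,
    ignoring spaces and underscore characters."""
    return name.lower().replace(" ", "").replace("_", "")

def simple_match(source_fields, target_fields):
    """Propose a target -> source mapping by normalized API name,
    refusing pairs whose field types differ. Fields are modeled as
    hypothetical (name, type) tuples."""
    by_norm = {normalize(n): (n, t) for n, t in source_fields}
    mapping = {}
    for name, ftype in target_fields:
        cand = by_norm.get(normalize(name))
        if cand and cand[1] == ftype:  # won't match different field types
            mapping[name] = cand[0]
    return mapping

src = [("Employee_ID", "Text"), ("Hire Date", "Date"), ("Salary", "Numeric")]
tgt = [("employee id", "Text"), ("hiredate", "Date"), ("Salary", "Text")]
print(simple_match(src, tgt))
# {'employee id': 'Employee_ID', 'hiredate': 'Hire Date'}
# Salary stays unmapped because the field types differ.
```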
9. On the Review step, verify the information. You can go back to any previous step and make any correction if necessary.
10. Click Finish and select Run Without Saving, Save and Run Now, or Save.
Result
When you save the data change task, Workday creates the data change task object and displays it on the Data Change Tasks tab of
the Data Catalog report.
When you run the data change task, Workday starts a data change activity to change the data in the table based on the data in the
source. You can view the data change activity progress and history on:
The Activities tab of the View Table Details report. To fix errors on a table following a data change, download the error file
from the data change activity. Workday only creates an error file for data change activities that use a file upload as the
source.
The Data Change Activities tab of the Data Catalog report.
The Prism Activities Monitor report. You can view all types of Prism-related activities together in the Prism Activities
Monitor report.
Next Steps
To fix errors on a table following a data change, download the error file from the data change activity on the Activities tab of the
View Table Details report. Workday only creates an error file for data change activities that use file upload or SFTP sources.
Related Information
Concepts
Concept: Data Change Tasks
Concept: Mapping Fields in Data Change Tasks
Concept: Creating Reports to Import into Tables and Datasets
Tasks
Create an SFTP Connection
Reference
Reference: Table Field Attributes
Reference: External Data Limits
2022R1 What's New Post: Report Sources for Data Change Tasks
2022R1 What's New Post: SFTP Sources for Data Change Tasks
2021R2 What's New Post: Prism Analytics Data Management
Prerequisites
Security:
Prism Datasets: Owner Manage domain in the Prism Analytics functional area.
Prism Datasets: Manage domain in the Prism Analytics functional area.
Table Editor permission on the table.
Table Owner permission on the table.
Can Delete Table Data permission on the table.
Can Insert Table Data permission on the table.
Can Truncate Table Data permission on the table.
Can Update Table Data permission on the table.
Context
You can create a schedule for running a data change task. You can:
Create a schedule for data change tasks that use any source type other than file upload.
Create 1 active schedule per data change task.
Define schedules to run on a recurring basis (Example: daily, weekly, or monthly).
Define schedules to run only if another Prism scheduled process completes at a status you specify.
Steps
1. Access the View Data Change Task report for the data change task you want to create a schedule for.
2. Select Data Change Task > Create Schedule from the related actions menu.
3. In Run Frequency, specify how often to run the data change task.
The choices include creating a dependent schedule.
4. Select the criteria for the schedule.
5. (Recurring schedules) As you configure the schedule, consider:
Option Description
Catch Up Behavior: Select how many times the scheduled data change runs after maintenance issues cause errors.
Option Description
Trigger on Status: Select the status of the scheduled future process that causes the data change task to run.
Time Delayed Configurations: (Optional) Specify the number of days, hours, or minutes to delay running the data change task after the trigger.
Result
Workday runs the data change task based on the criteria that you specify, creating a data change activity. View the status of all
scheduled processes on the Data Change Activities tab of the Data Catalog report or on the Prism Activities Monitor report.
Workday assigns a name to the schedule based on the name of the data change task and prepends Execute Data Change Schedule:
to the name. You can change the name of the schedule in the Request Name field when you edit the schedule. Workday displays
this name in the Prism Activities Monitor, Process Monitor, and Scheduled Future Processes reports to help you identify a specific
process request.
Next Steps
From the related actions menu of the data change task, you can:
Related Information
Reference
Reference: External Data Limits
2021R2 What's New Post: Data Change Tasks
A data change task is a Prism Analytics object that defines how to change the data in a Prism Analytics table using data from a
specified source. You can save data change tasks and run them at any time.
Create a data change task to output the transformed data from a derived dataset into a table. Example: You have a base dataset
with data from a custom report, and you have a derived dataset that transforms the data in the base dataset. You can create a data
change task that uses data from the derived dataset and loads it into a table.
You specify a data operation type, such as insert or delete, that instructs how to use the data in the source to change the table
rows. You can create 1 or more data change tasks per table.
You create, edit, and manage data change tasks from the Data Catalog. Access the Data Change Tasks tab on the Data Catalog
report to view all data change tasks you have permission on.
To create, view, or edit a data change task for a specific table, you must have permission to modify data in that table. Example: To
create a data change task on the Claims table using the upsert operation, you must have permission to insert and update data in
the Claims table.
You can:
Component Description
Source: The object that contains the data that you want to load into the target table. Example: A delimited file that you upload, an SFTP server, or a Data Catalog object, such as a dataset. Depending on the source type, you might need to define additional source options, such as parsing options.
Target table: The table with the data you want to change using the source data. You can only create a data change task using an existing table in the Data Catalog.
Target operation: The data operation to perform on the target table using the source data. Workday lists the operations supported for the selected source type:
Insert
Delete
Update
Upsert
Mapping: The specifications for matching source fields to the target table fields.
Data Change Activities tab on the Data Catalog report. The Data Change Activities tab displays all activities for all data
change tasks and tables.
Activities tab on the View Table Details report. The Activities tab displays all activities for that table, both from data change
activities and other data load activities. Example: You might have a different data load activity if you created the table from
a file upload.
Related Information
Reference
2021R2 What's New Post: Prism Analytics Data Management
When you create or edit a data change task, you define how to map fields from the source to target table fields. When you define
the field mappings:
Numeric fields are compatible when the digits before and after the decimal in the source field are less than or equal to the digits in
the target field. Example: You can map an Integer or Numeric (8,0) source field to a Numeric(10,0) or Numeric(12,2) target field, but
you can't map a Numeric(20,6) to a Numeric(20,5).
Instance and Multi-Instance fields are compatible when the business object for both fields is the same.
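The numeric compatibility rule above can be sketched as a comparison of digits before and after the decimal. This sketch assumes the Numeric(p, s) notation means p total digits with s digits after the decimal, so digits before the decimal are p minus s; that reading is an assumption based on the examples above.

```python
def numeric_compatible(src, tgt):
    """Compatibility rule above for Numeric(precision, scale) fields:
    digits before and after the decimal in the source must each be
    less than or equal to the corresponding digits in the target."""
    (sp, ss), (tp, ts) = src, tgt
    return (sp - ss) <= (tp - ts) and ss <= ts

print(numeric_compatible((8, 0), (12, 2)))   # True: 8 <= 10 before, 0 <= 2 after
print(numeric_compatible((20, 6), (20, 5)))  # False: 6 digits after > 5
```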
Related Information
Reference
2021R2 What's New Post: Prism Analytics Data Management
Prerequisites
Security: Prism: Manage Connection domain in the Prism Analytics functional area.
Context
You can create a connection to an SFTP server so that you can use it as a source in a data change task to load external data into a
table. SFTP connections:
When you run a data change task that uses the SFTP connection as the source, Workday connects to the SFTP server using the
configured authentication credentials and fetches the specified files. For the data change activity to succeed:
Each file from the server should be less than 1 GB compressed (less than 10 GB uncompressed approximately).
Use the Test Prism SFTP Connection task to verify that Workday can connect to the SFTP server and retrieve files.
Steps
Option Description
Name: Workday displays the name in the Data Catalog. You can change the name at any time.
Description: Enter a description that describes the SFTP server and configured file pattern.
File Name/Pattern: Enter the filename or a filename pattern that represents 1 or more files. The filename is case-sensitive. You can use the asterisk (*) and question mark (?) characters as wild cards to specify a filename pattern. Use the asterisk (*) to specify zero or more characters, and use the question mark (?) to specify exactly 1 character.
To specify a port number, add it to the end of the domain name or IP address. If you don't specify a port number, Workday uses port 22.
Directory: The directory on the server that contains the files. Directory names are case-sensitive. Include a leading slash (/) only for a full path, not a relative path.
Use Temp File: Writes the imported data to a temporary file in Workday with a randomly generated name. After the data import is complete, Workday automatically renames the file to the correct name. You might want to enable this option if the data import takes a very long time and might not finish before the next scheduled time to import data from the same server.
Host Key Fingerprint: The encryption key that the SFTP server uses for SSH communications.
Authentication Method and Details: Select the type of security authentication that the SFTP server uses:
Delete After Retrieval: Deletes the files on the SFTP server after the data is loaded into the target table. If Workday is unable to delete the files from the SFTP server, the data retrieval fails.
Decompress: Don't enable this option for data change tasks and tables. You can transfer files that are compressed or not. For compressed files, Workday only supports gzip compression.
Decrypt Using: If you want to decrypt the imported files using Pretty Good Privacy (PGP), select a PGP Private Key Pair.
Restricted To: Select the environment in which you want to use the settings defined in the Transport section. If you leave this option empty, Workday applies the transport settings to each environment in which the data change activity runs. When a data change activity runs in a particular environment, such as Implementation or Production, the transport settings only work if the Restricted To option matches the current environment. When the current environment and the configured environment don't match, the data change activity fails and retrieves no files from the SFTP server. You might want to restrict the transport settings to a particular environment to avoid inadvertently transferring test data to a non-test endpoint. Example: You create the SFTP connection and a data change task in an Implementation environment and select Implementation in Restricted To. Later, you migrate this SFTP connection and data change task to a Production environment. The next time the data change activity runs, it fails. To ensure that the data change activity runs successfully in the Production environment, edit the SFTP connection and either clear the Restricted To option or change it to Production.
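The File Name/Pattern wildcard rules above (asterisk for zero or more characters, question mark for exactly 1, case-sensitive matching) can be sketched as a regex translation. The filenames below are illustrative, not from an actual tenant.

```python
import re

def match_pattern(pattern: str, filename: str) -> bool:
    """Translate the wildcard rules above into a regex:
    '*' matches zero or more characters, '?' exactly 1 character.
    Matching is case-sensitive, as the guide states."""
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, filename) is not None

print(match_pattern("claims_*.csv", "claims_2023_01.csv"))  # True
print(match_pattern("claims_?.csv", "claims_12.csv"))       # False: ? is exactly 1 char
print(match_pattern("Claims_*.csv", "claims_2023.csv"))     # False: case-sensitive
```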
Result
Workday creates the SFTP connection and displays it on the Connections tab in the Data Catalog.
Next Steps
Create a data change task using the SFTP connection as the source to load data into the table.
Related Information
Tasks
Create a Data Change Task
Reference
Reference: Naming Guidelines
2022R1 What's New Post: SFTP Sources for Data Change Tasks
Prerequisites
Security: Set Up: Assignable Roles domain in the Organization and Roles functional area.
Context
To enable sharing a table with another user or security group, you create tenant-specific roles that correspond to the table-related Workday-provided roles. Sharing tables is a way to control access to individual tables.
Steps
The name you enter in Role Name becomes the prompt value in the Permission column on the Edit Table Sharing
task.
The security groups you select in Role Assignees Restricted To determine which users and groups you can share a
table with.
The security groups you select in Assigned by Security Groups determine which users can share a table.
Add the roles in the table below, but substitute security groups that your organization needs instead of Prism Data Writer
and Prism Data Administrator:
Role Name | Workday Role | Enabled for Self-Assign | Role Assignees Restricted To | Assigned by Security Groups
(For each role in the table: the Workday Role is Prism Table Owner (Workday Owned), and Security Administrator appears in the Assigned by Security Groups column.)
Note: You can substitute any user-based security groups your organization has created for Prism-related tasks instead of
selecting Prism Data Writer or Prism Data Administrator.
Related Information
Concepts
Concept: Sharing Tables and Datasets
Tasks
Share a Table with Others
Prerequisites
Security: Set Up: Assignable Roles domain in the Organization and Roles functional area.
Context
To enable sharing a dataset with another user or security group, you create tenant-specific roles that correspond to the dataset-related Workday-provided roles. Sharing datasets is a way to control access to individual datasets.
Steps
The name you enter in Role Name becomes the prompt value in the Permission column on the Edit Dataset Sharing
task.
The security groups you select in Role Assignees Restricted To determine which users and groups you can share a
dataset with.
The security groups you select in Assigned by Security Groups determine which users can share a dataset.
Add the roles in the table below, but substitute security groups that your organization needs instead of Prism Data Writer
and Prism Data Administrator:
Role Name | Workday Role | Enabled for Self-Assign | Role Assignees Restricted To | Assigned by Security Groups
(For each role in the table: the Workday Role is Prism Dataset Owner (Workday Owned), and Security Administrator appears in the Assigned by Security Groups column.)
Note: You can substitute any user-based security groups your organization has created for Prism-related tasks instead of
selecting Prism Data Writer or Prism Data Administrator.
Related Information
Concepts
Concept: Sharing Tables and Datasets
Tasks
Share a Dataset with Others
Prerequisites
Prism: Tables Owner Manage domain in the Prism Analytics functional area.
Table Owner permission on the table.
Context
When you create a table, you're the table owner. Being the owner of a table means you have Table Owner permission on the table.
As a table owner, you can share it with other users and security groups. The table sharing feature is a way to control access to
individual tables.
You might want to share a table with someone else so they can edit it, or import it into a derived dataset.
Steps
Related Information
Concepts
Concept: Sharing Tables and Datasets
Concept: Relax Sharing Options
Concept: Sharing Datasets Using Relax Sharing Rules
Tasks
Set Up Table Sharing
Prerequisites
Prism Datasets: Owner Manage domain in the Prism Analytics functional area.
Dataset Owner permission on the dataset.
Context
When you create a dataset, you're the dataset owner. Being the owner of a dataset means you have Dataset Owner permission on
the dataset. As a dataset owner, you can share it with other users and security groups. The dataset sharing feature is a way to
control access to individual datasets.
To share a derived dataset, you need owner permission on the current dataset and either:
You might want to share a dataset with someone else so they can edit it, or import it into another (derived) dataset.
Note: If you want to share a derived dataset with someone, they must also have at least viewer permission on all upstream objects
up until you reach the base datasets and tables, or until you reach a dataset with Relax Sharing Rules enabled and functional.
Steps
Option Description
Dataset Editor: User with this permission can do all tasks of a dataset viewer plus:
Dataset Owner: User with this permission can do all tasks of a dataset editor plus:
4. (Optional) Share the dataset with more users, or remove access from users listed in the table.
5. When you're done sharing the dataset with others, click OK.
Related Information
Concepts
Concept: Security in Prism Analytics
Concept: Relax Sharing Options
Concept: Sharing Datasets Using Relax Sharing Rules
Prerequisites
Security: Prism: Manage Data Source domain in the Prism Analytics functional area.
Context
Before you make Prism data in a table or dataset available for analysis, configure the security (security domains and securing
entities) that Workday applies to the data in the Prism data source. You configure the data source security by editing the table or
dataset, but Workday applies the security to the data in the Prism data source.
The configured securing entities work with the configured security domains and their security groups to determine which users
have access to which rows, fields, and field values in a Prism data source.
A securing entity is an Instance or Multi-Instance field that you use to constrain access to particular instance values for reporting
and analytics. A securing entity:
Use securing entities to control row-level and field value-level access in a Prism data source for users in constrained security
groups.
For a user to have access to a particular row or field value in a Prism data source, they must be a member of 1 of these security
groups:
An unconstrained security group that has permissions on a domain configured in the data source security.
A constrained security group that has permissions on a domain configured in the data source security, and the
corresponding securing entity is configured.
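The two-part access rule above (unconstrained group membership, or constrained membership with a matching securing entity value) can be sketched in Python. All of the shapes below, such as the dicts of groups and user contexts, are illustrative assumptions, not Workday's security model API.

```python
def can_see_row(user, row, securing_field, domain_groups):
    """Sketch of the rule above: a user sees a row if they belong to an
    unconstrained group with permission on a configured domain, or to a
    constrained group whose context includes the row's securing-entity
    value."""
    for group in user["groups"]:
        cfg = domain_groups.get(group)
        if cfg is None:
            continue                  # group has no permission on the domain
        if not cfg["constrained"]:
            return True               # unconstrained: all rows visible
        if row[securing_field] in user["context"].get(group, set()):
            return True               # constrained: securing entity must match
    return False

domain_groups = {"HR Analysts": {"constrained": True},
                 "Admins": {"constrained": False}}
user = {"groups": ["HR Analysts"],
        "context": {"HR Analysts": {"Supervisory Org A"}}}
print(can_see_row(user, {"org": "Supervisory Org A"}, "org", domain_groups))  # True
print(can_see_row(user, {"org": "Supervisory Org B"}, "org", domain_groups))  # False
```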
Workday restricts user access to data in a Prism data source for these security groups:
All unconstrained
Role-based constrained
Aggregation when role-based
Intersection when role-based
Workday has tested and supports using securing entity fields that use these business objects:
Company
Company Hierarchy
Cost Center
Cost Center Hierarchy
Person
Location Hierarchy
Region
Region Hierarchy
Supervisory Organization
Steps
1. Access the Edit Data Source Security task for the table or dataset you want to apply security to.
2. In the Domains prompt, select 1 or more security domains to use to determine who can see the Prism data source.
If you specify a security domain that has a constrained security group, then you must specify an appropriate securing entity.
3. (Optional) In the Securing Entities prompt, select 1 or more fields in the dataset. Workday lists the Instance or Multi-
Instance fields in the table or dataset that act as securing entities.
The securing entities work with the:
Data Source Security domains to determine row-level access for a user.
Field Level Security domains to determine field value-level access for a user.
Note: Workday uses any in common logic when evaluating the contextual security using a Multi-Instance field.
Note: When you specify more than 1 securing entity that relates to the same security group, Workday uses the OR condition
between them. Depending on how your security groups are set up, a user might see some additional rows or field values.
Make sure you test the report results to ensure that the report produces expected results for each user.
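The two notes above (OR between securing entities that relate to the same security group, and any-in-common logic for Multi-Instance fields) can be sketched as set intersections. The field names and values below are illustrative assumptions.

```python
def row_visible(row_entities, user_entities):
    """Sketch of the evaluation above: multiple securing entities for
    the same security group combine with OR, and a Multi-Instance field
    grants access when the row and the user have ANY value in common."""
    return any(
        set(row_entities[field]) & set(user_entities.get(field, []))
        for field in row_entities
    )

row = {"Cost Center": ["CC100"], "Region": ["EMEA", "APAC"]}
print(row_visible(row, {"Region": ["EMEA"]}))        # True: any-in-common on Region
print(row_visible(row, {"Cost Center": ["CC200"]}))  # False: no entity matches
```

The OR behavior is why a user might see some additional rows or field values, as the note warns.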
4. In the Default Domain(s) for Dataset Fields prompt, select 1 or more security domains that Workday applies to every field in
the Prism data source unless you override the domain for a particular field in the next section.
Note: When you add new fields to the table or dataset, Workday applies this default domain to the new fields. You might
want to consider specifying a domain with more restrictive access. Then you can override the default domain on a per field
basis to allow more access as necessary.
5. (Optional) You can select different domains to apply to specific fields to override the default domains.
6. Review any Security Configuration Audit messages to learn more about any issues with the configured securing entities
and domains.
7. (Optional) Click Back to make any changes to the configured security options based on the audit messages.
8. Select the Apply Security check box to apply your changes.
If you want to restrict access to rows using any of these security group types, Workday can't honor those restrictions:
Result
Workday saves the security information. You can view the current security status by selecting Actions > Security > View Data
Source Security.
Example
Suppose that you select these domains containing these security groups. To enforce contextual security at the row level and field value level, use these fields as securing entities:
Security Domain Contains This Security Group Use This Securing Entity
Next Steps
Create the Prism data source by enabling the table for analysis or publishing the dataset. Workday applies the security restrictions
to the data in the Prism data source.
Prism Analytics uses Workday's strong, flexible, and configurable security model to control access to data, objects, and tasks.
Which users have access to what data depends on where in the Prism data workflow they are. For more information on the
workflow, see Concept: Prism Analytics Data Management Workflow.
Phase 1 and Phase 2: Create and Edit Tables, Datasets, and Data Change Tasks
In the first and second phases in the data management workflow, you bring data into the Prism Analytics Data Catalog and then
transform it. You create and edit these objects in the Data Catalog:
Tables. Tables include metadata and all data rows in the table.
Derived datasets. Datasets include metadata and a subset of data as a small collection of example rows.
Data change tasks. Data change tasks include a small collection of example rows from the target table, and from the
source if it's another dataset or table.
When it comes to security with tables, data change tasks, and datasets, you control access to the metadata and data together.
Workday controls who can do what with tables and datasets in these ways:
Method Notes
Security administrator grants access: Your Workday security administrator can configure Workday security domains to grant groups of users the ability to create, edit, or manage tables and datasets. Depending on how the administrator configures the security domains, some users might be able to create, view, edit, or act as an owner of all tables and datasets. Your Workday security administrator can configure these Workday security domains to grant access to groups of users:
Table sharing and dataset sharing: The user who created a table or dataset can share it with particular users and grant different levels of access to each user. When you create a table or dataset, you're the table owner or dataset owner. Being the owner means that you have Table Owner or Dataset Owner permission on the table or dataset. As an owner, you can grant different levels of access by assigning permissions to another user. Example: You can assign Table Viewer or Can Truncate Table Data permission to a table, or Dataset Editor permission to a dataset.
Access to a table or dataset is unconstrained. This means that any user who can create or view a table or dataset can view all
fields and data (example data for datasets and data change tasks), regardless of the origin of the data. When you create a table or
base dataset from a Workday report, Workday removes all security domains configured for the business objects in the table or
dataset.
This unconstrained access only applies when you use these tasks and reports:
Instead, you define the data source security to apply to the data in the Prism data source before making the data available for
analysis.
You can't configure access on a data change task directly. Instead, Workday controls your access to a data change task based on:
Your target table permissions. Workday uses these table permission types:
Table Viewer
Table Editor
Table Owner
Can Delete Table Data
Can Insert Table Data
Can Update Table Data
The specified operation type. You need permission to change data in the table that is compatible with the specified
operation type. Example: To create or edit a data change task on the Claims table using the upsert operation, you must have
permission to insert and update data in the Claims table.
Your source access. You need permission to view the source, such as view permission on a custom report, source dataset,
or SFTP connection. You don't need view permission on the source when the type is file upload.
Workday doesn't enable users to have view permission only on a data change task. If you have permission to edit a data change
task, then you have permission to view it.
If you meet the source access requirement, then you can perform the actions below with the specified target table permissions:
Create a data change task. | No | Yes, but you can only select an operation type that is compatible with your table permissions. | Yes
Edit the data change task operation type. | No | Yes, but you can only select an operation type that is compatible with your table permissions. | Yes
Edit the data change task source. | No | Yes, but you need permission on the new source. | Yes, but you need permission on the new source.
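The compatibility rule above (the operation type must match your table data permissions, for example upsert requires both insert and update permission) can be sketched as a lookup of required permission sets. The permission set for truncate and insert is an assumption inferred from the operation's behavior, not stated in the guide.

```python
REQUIRED = {
    "insert": {"Can Insert Table Data"},
    "truncate_and_insert": {"Can Truncate Table Data", "Can Insert Table Data"},
    "delete": {"Can Delete Table Data"},
    "update": {"Can Update Table Data"},
    "upsert": {"Can Insert Table Data", "Can Update Table Data"},
}

def compatible_operations(user_permissions):
    """Operation types a user may select, per the rule above that the
    operation must be compatible with their table data permissions."""
    perms = set(user_permissions)
    return sorted(op for op, need in REQUIRED.items() if need <= perms)

print(compatible_operations({"Can Insert Table Data", "Can Update Table Data"}))
# ['insert', 'update', 'upsert']
```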
You configure the data source security by editing the table or dataset, but Workday applies the security to the data in the Prism
data source.
Define the data source security after the data is ready to be exposed to other users by giving them access to the Prism data
source.
To configure the data source security, you must have access to the Prism: Manage Data Source security domain. For details, see Edit Prism Data Source Security.
You can restrict access to the data in a Prism data source at these levels:
Data source-level. Specify 1 or more security domains that apply to the Prism data source that Workday creates. These
domains determine which users can see the Prism data source. This is sometimes known as table-level security. If you
don't configure any domain, Workday uses the Prism: Default to Dataset Access domain.
Row-level. Optionally, you can enforce row-level security by specifying in the Securing Entities prompt 1 or more Instance
or Multi-Instance fields in the table or dataset, such as Supervisory Organization. The securing entities work with the
configured data source-level security domains to determine which users have access to which rows (and field values) in a
Prism data source.
Field-level. Specify 1 or more security domains to apply to the fields in the Prism data source. These domains determine
which users can see each field in the Prism data source.
Field value-level. Workday uses any configured Securing Entities with the configured field-level security domains to
determine which users have access to which field values in a Prism data source.
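The four levels act as successive filters. This is a conceptual sketch of the documented behavior, not Workday internals; the data structures are hypothetical:

```python
# Conceptual model of the four security levels. 'source' is a hypothetical
# description of a Prism data source; none of this is a Workday API.

def visible_data(user_domains, user_entities, source):
    """Return the rows and field values a user can see."""
    # Data source-level: the user needs one of the data source's domains.
    if not (user_domains & source["domains"]):
        return []
    # Field-level: keep only fields whose domains the user holds.
    fields = {f for f, doms in source["field_domains"].items()
              if doms & user_domains}
    ent = source["securing_entity"]  # e.g. Supervisory Organization
    out = []
    for row in source["rows"]:
        # Row-level: the securing entity value must match the user's access.
        if ent and row.get(ent) not in user_entities:
            continue
        # Field value-level: only values of visible fields survive.
        out.append({f: v for f, v in row.items() if f in fields})
    return out

source = {
    "domains": {"HR Analytics"},
    "field_domains": {"name": {"HR Analytics"}, "salary": {"Payroll"},
                      "org": {"HR Analytics"}},
    "securing_entity": "org",
    "rows": [{"name": "A", "salary": 100, "org": "Sales"},
             {"name": "B", "salary": 200, "org": "Engineering"}],
}
print(visible_data({"HR Analytics"}, {"Sales"}, source))
# [{'name': 'A', 'org': 'Sales'}]
```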
The way you create a Prism data source and the security required depend on the Data Catalog object. For more information, see
Concept: Making Prism Data Available for Analysis.
Optionally, you can create a Prism data source without any data source security configured. When no data source security is
configured, Workday applies the Prism: Default to Dataset Access security domain to the Prism data source. The Prism: Default to
Dataset Access domain provides contextual access to a Prism data source based on your access to the underlying table or
dataset.
Related Information
Tasks
Share a Dataset with Others
Edit Prism Data Source Security
Workday enables you to have fine-grained control over what you can do with tables and datasets. Sharing tables and datasets is a
way to control access to individual tables and datasets.
When your tenant is set up for table and dataset sharing, table owners and dataset owners can share a table or dataset with
another user or security group. Example: You can control who can view a dataset, edit the schema of a table, insert data into a
table, or delete table data.
Most Workday roles are tied to an organization. However, table-related and dataset-related Workday roles are tied to an object type,
the table or dataset. By tying a role to an object type, Workday enables you to control which permissions a user inherits for a
particular table or dataset.
Workday maps each table-related and dataset-related role to a Workday owned security group, and that security group
automatically inherits permissions from 1 or more security domains.
Example: Workday maps the Workday role "Table Schema Editor" to the "Prism Table Schema Editor (Workday Owned)" security
group, and that security group inherits View and Modify permissions on the Prism: Tables Manage Schema domain.
Example: Workday maps the Workday role "Prism Dataset Editor" to the "Prism Dataset Editor (Workday Owned)" security group,
and that security group inherits View and Modify permissions on the Prism: Datasets Manage domain.
As a result of these connections, you can enable table and dataset sharing by creating a tenant-specific role and mapping it to a
table-related or dataset-related Workday role. The tenant-specific role becomes the table or dataset permission that you can share
with others.
Related Information
Tasks
Share a Table with Others
Share a Dataset with Others
By default, to access the current dataset, you must have access to all upstream datasets and tables. Example: To share a derived
dataset, you must have owner permission on the derived dataset and at least viewer permission on all upstream datasets and
tables.
However, you can relax some permission requirements so that users require access on fewer upstream datasets and tables to
have access to the current dataset.
To relax the sharing restrictions with tables and datasets, Workday provides these options when you edit dataset sharing and table
sharing:
Relax Sharing Rules | When you enable Relax Sharing Rules on a table or dataset:
Prevent Relax Sharing on Derived Datasets | When you enable Prevent Relax Sharing on Derived Datasets, Workday revokes the Relax Sharing Rules functionality on all downstream derived datasets.
To configure these options, you must have access to the Prism: Manage Relax Sharing domain in the Prism Analytics functional
area.
Example: You create a table called Payrolls that contains sensitive data, and you create a derived dataset off of it called Filtered
Payrolls that filters out the sensitive data. You want Norman Chan to view the Filtered Payrolls dataset, but not the Payrolls table.
To accomplish this sharing scenario, you enable Relax Sharing Rules on Filtered Payrolls, and then assign Norman Chan viewer
permission on it. Now, Norman can view Filtered Payrolls and create a derived dataset from it even though he doesn’t have viewer
permission on the Payrolls table. Also, he can share the derived dataset he created without having owner permission on Filtered
Payrolls or Payrolls.
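The viewing behavior in this example can be sketched as a small recursive check: a user can view an object when they have viewer permission on it and either Relax Sharing Rules is enabled on it or they can view all of its upstream objects. The data structures below are hypothetical; this models the documented behavior, not Workday internals:

```python
# Conceptual model of dataset viewing with Relax Sharing Rules.
# 'upstream' maps each object to the objects imported into it.
upstream = {
    "Payrolls": [],
    "Filtered Payrolls": ["Payrolls"],
}
viewer = {"Filtered Payrolls"}   # Norman's viewer permissions
relax = {"Filtered Payrolls"}    # Relax Sharing Rules enabled here

def can_view(obj):
    if obj not in viewer:
        return False
    # Relax Sharing Rules waives the upstream viewer requirement.
    if obj in relax:
        return True
    return all(can_view(u) for u in upstream[obj])

print(can_view("Filtered Payrolls"))  # True: no viewer permission on Payrolls needed
print(can_view("Payrolls"))           # False
```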
Action | Required permissions
View the current dataset transformations. Copy the current dataset (derived datasets only). | Relax Sharing Rules enabled and functional on the current dataset or table and at least viewer permission on each dataset or table that is imported into the current dataset, or:
Edit the current dataset or table. | At least editor permission on the current dataset or table, and either:
Share the current dataset or table (the action that is specific to owners). | Owner permission on the current dataset or table, and either:
View the lineage of the current table or dataset. | At least viewer permission on the current dataset or table. Note that when you view lineage, you only see the upstream or downstream tables and datasets on which you have viewer permission (or better).
Related Information
Tasks
Share a Table with Others
Share a Dataset with Others
The relax sharing options work with table and dataset sharing permissions to determine which users can access a table or dataset.
The kind of access users have determines their ability to:
Scenario 1
In the first scenario, Beth Liu made multiple tables and datasets, and shared some of them with Norman Chan and Dawn Myers.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the Employee Grants dataset looks like this:
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can edit, can view | No access
Filtered Stock Grants | Can share, edit, and view | No access | No access
Although Norman Chan is an owner of Filtered Payrolls, he can't share it with others because he only has View permission on the
upstream dataset Payrolls.
Scenario 2
This scenario is based on the previous scenario. Beth Liu enabled the Relax Sharing Rules check box on the Filtered Payrolls and
Filtered Stock Grants datasets.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the Employee Grants dataset looks like this. The green check marks indicate that Relax Sharing Rules is
enabled.
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can share, edit, and view | Can view
Filtered Stock Grants | Can share, edit, and view | Can share and view | Can view
Employee Grants | Can share, edit, and view | Can view | Can view
With Relax Sharing Rules enabled on Filtered Payrolls and Filtered Stock Grants:
Norman can share Filtered Payrolls because he no longer needs owner permission on the upstream object Payrolls.
Norman can share Filtered Stock Grants because he no longer needs owner permission on the upstream object Stock
Grants, but he can't edit Filtered Stock Grants because he doesn't have viewer permission on the upstream object Stock
Grants.
Norman can view Employee Grants because he no longer needs viewer permission on all upstream objects.
Dawn can view Filtered Payrolls, Filtered Stock Grants, and Employee Grants because she no longer needs viewer
permission on all upstream objects.
Scenario 3
This scenario is based on the previous scenario. Norman Chan created a derived dataset titled DDS Norman by importing the
Employee Grants dataset.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the DDS Norman dataset (as viewed by a Prism Data Administrator who can view all objects) looks like this:
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can share, edit, and view | Can view
Filtered Stock Grants | Can share, edit, and view | Can share and view | Can view
Employee Grants | Can share, edit, and view | Can view | Can view
Norman Chan didn't share DDS Norman with Beth Liu, so Beth doesn't see it in her Data Catalog and she doesn't know that it exists.
Although Norman created DDS Norman (he's the owner), he isn't able to share it with other users because he only has Viewer
permission on Employee Grants, not Owner permission. To share a dataset, you must have Owner permission on all upstream
objects up until you reach a dataset or table with Relax Sharing Rules enabled and functional. On the dataset or table with Relax
Sharing enabled, you only need Viewer permission.
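The sharing rule stated here can be sketched as a recursive walk up the lineage: you need Owner permission on each upstream object until you reach one with Relax Sharing Rules enabled and functional, where Viewer permission is enough and the walk stops. The data structures are hypothetical; this models the documented rule, not Workday internals:

```python
# Conceptual model of the sharing rule with Relax Sharing Rules.
upstream = {
    "Employee Grants": [],            # further upstream objects omitted here
    "DDS Norman": ["Employee Grants"],
}
owner = {"DDS Norman"}                # Norman owns the dataset he created
viewer = {"DDS Norman", "Employee Grants"}
relax = set()                         # Scenario 3: no Relax Sharing Rules upstream

def upstream_ok(obj):
    # Relax-enabled object: Viewer permission suffices and the walk stops.
    if obj in relax:
        return obj in viewer
    return obj in owner and all(upstream_ok(u) for u in upstream[obj])

def can_share(obj):
    if obj not in owner:
        return False
    # Relax Sharing Rules on the object itself waives the upstream checks.
    if obj in relax:
        return True
    return all(upstream_ok(u) for u in upstream[obj])

print(can_share("DDS Norman"))        # False: only Viewer on Employee Grants
relax.add("Employee Grants")          # Scenario 4: Relax Sharing Rules enabled
print(can_share("DDS Norman"))        # True
```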
Scenario 4
This scenario is based on the previous scenario. Beth Liu enabled the Relax Sharing Rules check box on the Employee Grants
dataset.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the DDS Norman dataset (as viewed by a Prism Data Administrator who can view all objects) looks like this:
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can share, edit, and view | Can view
Filtered Stock Grants | Can share, edit, and view | Can share and view | Can view
Employee Grants | Can share, edit, and view | Can view | Can view
With Relax Sharing Rules enabled on Employee Grants (Norman has Viewer permission on Employee Grants), Norman can share
DDS Norman with others.
Scenario 5
This scenario is based on the previous scenario. Norman Chan enabled the Relax Sharing Rules check box on DDS Norman, and
shared DDS Norman with Dawn Myers, giving her Owner permission.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the DDS Norman dataset (as viewed by a Prism Data Administrator who can view all objects) looks like this:
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can share, edit, and view | Can view
Filtered Stock Grants | Can share, edit, and view | Can share and view | Can view
Employee Grants | Can share, edit, and view | Can view | Can view
DDS Norman | No access | Can share, edit, view | Can share, edit, view
With Relax Sharing Rules enabled on DDS Norman while giving Dawn Myers Owner permission, Dawn can share, edit, and view DDS
Norman. Beth Liu is unaware that DDS Norman exists and that Dawn can share it with others.
Scenario 6
This scenario is based on the previous scenario. Beth Liu enabled the Prevent Relax Sharing on Derived Datasets check box on
Filtered Stock Grants.
Table or Dataset | Owners | Viewers | Relax Sharing Rules Enabled | Prevent Relax Sharing on Derived Datasets Enabled
The dataset lineage for the DDS Norman dataset (as viewed by a Prism Data Administrator who can view all objects) looks like this.
The red circle with the line through it indicates the dataset that has Prevent Relax Sharing on Derived Datasets enabled:
Beth Liu, Norman Chan, and Dawn Myers have this table and dataset access:
Table or Dataset | Beth Liu | Norman Chan | Dawn Myers
Filtered Payrolls | Can share, edit, and view | Can share, edit, and view | Can view
Filtered Stock Grants | Can share, edit, and view | Can share and view | Can view
Employee Grants | Can share, edit, and view | Can view | Can view
Now that Prevent Relax Sharing on Derived Datasets is enabled on Filtered Stock Grants, it cancels out the effect of Relax Sharing
Rules being enabled on both Employee Grants and DDS Norman.
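Whether Relax Sharing Rules is functional on an object can be sketched as a check over its upstream lineage: the flag on an object revokes Relax Sharing Rules on every downstream derived dataset, but not on the object itself. The data structures are hypothetical; this models the documented behavior, not Workday internals:

```python
# Conceptual model of Prevent Relax Sharing on Derived Datasets, using the
# lineage from the scenarios above.
upstream = {
    "Payrolls": [], "Stock Grants": [],
    "Filtered Payrolls": ["Payrolls"],
    "Filtered Stock Grants": ["Stock Grants"],
    "Employee Grants": ["Filtered Payrolls", "Filtered Stock Grants"],
    "DDS Norman": ["Employee Grants"],
}
relax = {"Filtered Payrolls", "Filtered Stock Grants", "Employee Grants", "DDS Norman"}
prevent = {"Filtered Stock Grants"}   # Scenario 6

def ancestors(obj):
    found = set()
    for u in upstream[obj]:
        found.add(u)
        found |= ancestors(u)
    return found

def relax_functional(obj):
    """Relax Sharing Rules is functional only when it is enabled on the object
    and no upstream object has Prevent Relax Sharing on Derived Datasets."""
    return obj in relax and not (ancestors(obj) & prevent)

print(relax_functional("Filtered Stock Grants"))  # True: the flag affects only downstream objects
print(relax_functional("Employee Grants"))        # False
print(relax_functional("DDS Norman"))             # False
```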
Related Information
Tasks
Share a Table with Others
Share a Dataset with Others
Prerequisites
Security Activation
Security Configuration
Context
When you create and edit datasets, you typically need to build reports and visualizations to test the data you’re transforming in the
dataset. To test your work, you must publish your dataset, which requires access to the Prism Datasets: Publish domain.
You can set up your tenant to restrict users to publishing only the datasets they have contextual access to. Contextual
publishing enables dataset users to publish a dataset based on their dataset permission, such as Dataset Owner or Dataset Editor.
You might want to enable contextual publishing to enable dataset users to test their work without giving them unconstrained
access to publish all datasets.
To enable contextual publishing, you must have already enabled dataset sharing by creating assignable roles that correspond to
dataset-related Workday roles.
Steps
Dataset Owner
Dataset Editor
Select Role has access to the positions they support for the Access Rights to Multiple Job Workers option.
1. Create a role-based (constrained) security group to enable users to publish datasets they have Dataset Owner
permission on. Example: set Access Rights to Multiple Job Workers to Role has access to the positions they support.
2. Edit the domain security policy for the Prism Datasets: Publish domain in the Prism Analytics functional area.
Add the role-based (constrained) security group you created, and assign both View and Modify task permissions.
3. Activate pending security policy changes.
Prerequisites
Security: Prism Datasets: Publish domain in the Prism Analytics functional area.
Context
Steps
1. Access the View Dataset Details report for the dataset you want to publish.
2. Click Publish, or from the related actions menu, select Publishing > Publish Dataset.
Workday:
Reads the source data in the dataset.
Transforms the source data using the transformation logic defined in the dataset.
Creates a Prism data source and loads it with the transformed data. The Prism data source has the same name as
the dataset API name.
Applies the appropriate security restrictions to the data.
3. View the publishing process status from the Publishing Activities tab on the View Dataset Details report.
Refresh the browser to see the most current status. The status includes the date and time that Workday last published the
dataset successfully. The last successful published date informs you about the freshness of the data in the Prism data
source. Example: If the last successful publish date is 1 week ago, but your publishing schedule is set to publish daily, this
discrepancy could indicate a failure in the publishing process.
You can also view the status of manually run publish requests in the Process Monitor report.
4. (Optional) You can cancel the publishing process by clicking Cancel Publishing on the View Dataset Details report.
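The staleness heuristic described in step 3 can be sketched in a few lines. This is an illustrative check, not a Workday feature; Workday surfaces the publish status in the UI:

```python
from datetime import datetime, timedelta

# If the last successful publish is older than the schedule interval,
# the publishing process may be failing.
def publish_looks_stale(last_success, now, interval):
    return now - last_success > interval

last = datetime(2023, 3, 4, 9, 0)    # last successful publish: 1 week ago
now = datetime(2023, 3, 11, 9, 0)
print(publish_looks_stale(last, now, timedelta(days=1)))  # daily schedule -> True
```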
Result
You can view the Prism data source by accessing the View Prism Data Source report, and selecting the name of the dataset you
published.
Related Information
Tasks
Unpublish a Dataset
Reference
The Next Level: Prism Analytics Best Practices
The Next Level: Prism Performance and Troubleshooting Tips
Prerequisites
Security: Prism Datasets: Publish domain in the Prism Analytics functional area.
Context
You can create a publish schedule for a dataset. You can schedule the publish to run:
You can't schedule publishes of the same dataset at times that overlap.
Steps
1. Access the View Dataset Details report for the dataset you want to publish.
2. From the related actions menu, select Publishing > Create Schedule.
3. In Run Frequency, specify how often to publish the dataset. The choices include creating a dependent publish schedule.
4. Select the criteria for the schedule.
5. (Recurring schedules) As you configure the schedule, consider:
Option | Description
Catch Up Behavior | Select how many times the scheduled publish runs after maintenance issues cause errors.
Trigger on Status | Select the status of the scheduled future process that triggers publishing the dataset.
Time Delayed Configuration | (Optional) Specify the number of days, hours, or minutes to delay publishing the dataset after the trigger. You might want to delay publishing to review the latest source files.
Workday assigns a name to the schedule based on the name of the dataset and prepends Publish Schedule: to the name.
You can change the name of the schedule in the Request Name field when you edit the schedule. Workday displays this
name in the Process Monitor and Scheduled Future Processes reports to help you identify a specific process request.
Result
Workday publishes a dataset based on the criteria that you specify. When publishing, Workday:
View the status of all scheduled publishing processes on the Dataset Activities tab of the Data Catalog report or on the Prism
Activities Monitor report.
Refresh the browser to see the most current status. The status includes the date and time (UTC) that Workday last published the
dataset successfully. The last successful publish date informs you about the freshness of the data in the Prism data source.
Example: If the last successful publish date is 1 week ago, but you set your publish schedule to publish daily, this discrepancy
could indicate a failure in the publishing process.
Related Information
Concepts
Concept: Security in Prism Analytics
Reference
FAQ: Dataset Publish Schedules
The Next Level: Prism Analytics Best Practices
Examples
Example: Create Dependent Publish Schedules for Datasets
You can make Prism data in the Data Catalog available for analysis by creating a Prism data source from either a table or dataset.
When Workday creates a Prism data source, it:
Loads the data source with the data from the table or dataset.
Applies the appropriate security restrictions to the data source, fields, records, and field values.
The way you create a Prism data source depends on the Data Catalog object:
Table: Use the Enable for Analysis option when you create or edit the table schema. See Edit a Table.
Dataset: Publish the dataset. You can create a publish schedule or publish a dataset manually on an ad hoc basis. See Create Dataset Publish Schedules and Publish a Dataset as a Prism Data Source Manually. Security: any of these security requirements: Prism Datasets: Owner Manage domain, Prism Datasets: Publish domain.
Workday applies the security domains configured in the Edit Data Source Security task for the table or dataset.
When you first create a table or dataset, no security domain is applied to the data source. However, you can still create a Prism
data source even if you haven't specified a security domain for it. Workday applies the Prism: Default to Dataset Access security domain if
no domain has been configured. The Prism: Default to Dataset Access domain provides contextual access to a Prism data source
based on your access to the underlying table or dataset.
Workday recommends making Prism data available for analysis only after you edit the data source security for the table or dataset.
Related Information
Concepts
Concept: Prism Data Sources
Tasks
Unpublish a Dataset
Reference
The Next Level: Prism Analytics Best Practices
The Next Level: Prism Performance and Troubleshooting Tips
Schedules for publishing datasets enable you to specify when, how often, and under what criteria to publish a dataset. Publish
schedules differ from publishing immediately on an ad hoc basis using Run Now.
After you create a publish schedule, consider these actions that you can perform on it:
Action | Description
Change Schedule (recurring schedules only) | Edit the run frequency (daily, monthly, weekly), start time, and date range for the publish schedule. You can also change to another scheduled recurring process.
Edit Environment Restrictions | Select the environment in which you want the scheduled publish to run.
Edit Schedule (recurring schedules only) | Edit the schedule name, recurrence criteria, and range of recurrence dates.
Transfer Ownership | Transfer ownership of a publish schedule. Every process must have an assigned owner for the process to run.
View All Occurrences (recurring schedules only) | View all future occurrences of a publish schedule within a specified range of dates and times.
View Schedule (recurring schedules only) | View schedule details, such as recurrence criteria, error messages, the schedule owner and creator, and the next 10 scheduled launches if applicable.
Related Information
Tasks
Create Dataset Publish Schedules
Reference
FAQ: Dataset Publish Schedules
The Next Level: Prism Analytics Best Practices
The last integration that completes before publishing begins determines the freshness of data in a Prism data source. To ensure
that data is as fresh as possible, schedule enough time for an integration to complete before a dataset publishing job begins.
As you create publish schedules for datasets that have scheduled integrations, consider how these scenarios affect the freshness
of data in the Prism data source:
Scenario | Impact
Integration completes before a scheduled publish begins. | The published dataset uses the latest available data from the dataset source file.
Integration completes after a scheduled publish begins. | The published dataset uses the data from the dataset source file that existed before the integration began.
To help ensure that integration completes before publishing begins, you can create a publish schedule that depends on the
successful integration of the data.
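The two timing scenarios above reduce to a one-line comparison. This is an illustrative sketch, not a Workday feature:

```python
from datetime import datetime

# A publish picks up the newly integrated data only if the integration
# finished before publishing began.
def publish_uses_latest_data(integration_end, publish_start):
    return integration_end <= publish_start

print(publish_uses_latest_data(datetime(2023, 3, 6, 8, 30),
                               datetime(2023, 3, 6, 9, 0)))   # True
print(publish_uses_latest_data(datetime(2023, 3, 6, 9, 30),
                               datetime(2023, 3, 6, 9, 0)))   # False
```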
Related Information
Tasks
Create Dataset Publish Schedules
Scenario
As an HR Analyst, you’re responsible for creating a weekly report that includes employee hiring and turnover data from your New
York office. You’d like the data in the report to be as fresh as possible. You need to:
Bring in the latest hiring and turnover data from your New York office every Monday morning.
Make the data available for reporting only after you’ve brought in the latest data.
Prerequisites
Create a dataset from SFTP integration. Schedule the integration to occur every Monday at 8 a.m. EST. Name the dataset
New York Hiring and Turnover Weekly. Run the integration once to enable the publish option.
Security: Prism Datasets: Publish domain in the Prism Analytics functional area.
Steps
1. Access the View Dataset Details report for the New York Hiring and Turnover Weekly dataset.
2. From the related actions menu of the View Dataset Details report, select Publishing > Create Schedule.
3. In Run Frequency, select Dependent.
4. In Dependency, select Prism > Dataset Integration Schedule: New York Hiring and Turnover Weekly.
5. In Trigger on Status, select Completed.
Result
Workday publishes the New York Hiring and Turnover Weekly dataset only after the successful integration of data from the SFTP
server.
View the publish schedule request in the Process Monitor and Scheduled Future Processes reports.
View the status of all scheduled publish processes on the Dataset Activities and Prism Activities Monitor reports.
Related Information
Concepts
Concept: Making Prism Data Available for Analysis
Concept: Dataset Publish Schedules
Tasks
Create Dataset Publish Schedules
Steps: Create a Dataset with External Data (SFTP Server)
Edit the schedule: From the related actions menu of the View Dataset Details report, select Publishing > Edit Schedule.

Edit a scheduled occurrence:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Edit Scheduled Occurrence.

Change the schedule:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Change Schedule.

Edit environment restrictions:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Edit Environment Restrictions.

Delete the schedule:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Delete.

Expire the schedule: From the related actions menu of the View Dataset Details report, select Publishing > Expire Schedule.

Suspend the schedule:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Suspend.

Activate the schedule:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Activate.
Note: You can only activate expired schedules.

View the schedule: From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.

View all occurrences:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > View All Occurrences.

Run the schedule now:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Run Now.

Transfer ownership:
1. From the related actions menu of the View Dataset Details report, select Publishing > View Schedule.
2. From the related actions menu of the View Scheduled Future Process report, select Schedule Future Process > Transfer Ownership.
Related Information
Concepts
Concept: Making Prism Data Available for Analysis
Concept: Dataset Publish Schedules
Tasks
Create Dataset Publish Schedules
Prerequisites
Context
You can remove all data in a table by truncating the table. Truncating a table removes the data, but retains the schema. You might
want to truncate a table if the table contains some bad data.
If you've previously published a derived dataset based on this table, the associated Prism data source still contains data. If you
also want to remove the data from the associated Prism data source, then you must publish the derived dataset again. Publishing a
derived dataset from a truncated table makes the associated Prism data source empty, but active.
If you selected the Enable for Analysis option for this table, then truncating the table also removes all data from the associated
Prism data source. The Prism data source is empty, but active.
Steps
1. Access the View Table Details report for the table you want to truncate.
2. Select Truncate Data from the Quick Actions menu.
Prerequisites
Security:
Prism: Manage File Containers domain in the Prism Analytics functional area when uploading a file.
Any of these security requirements:
Prism: Tables Owner Manage domain in the Prism Analytics functional area.
Prism: Tables Manage domain in the Prism Analytics functional area.
Table Owner permission on the table.
Table Editor permission on the table.
Can Delete Table Data permission on the table.
Can Truncate Table Data permission on the table.
Context
You can delete rows from a table in these ways:

By load ID: You can quickly delete all rows that came from a particular data load activity. Select Table > Delete Rows from the related actions menu of the table, then select Previous Loads.

By data change task: You can delete specific rows that you specify using a data change task with the delete operation. You delete rows by specifying one of these target fields as the Delete Key:

WPA_LoadID
WPA_RowID

To delete rows using a data change task, you need a delimited file with a single field that contains the values of the delete key field. Example: You can delete the rows for particular values of the field configured as the external ID. Create a CSV file that contains the values of the external ID field you want to delete. To delete some rows from the ClaimID field, your CSV file might look like:
ClaimID
345999
345600
345601
345602
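A file like this can be generated programmatically. This sketch uses the ClaimID values from the example; the output file name is hypothetical:

```python
import csv

# Build the single-field delimited file described above, listing the
# ClaimID values (the external ID used as the Delete Key) to delete.
claim_ids = ["345999", "345600", "345601", "345602"]

with open("claims_to_delete.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ClaimID"])               # header: the delete key field
    writer.writerows([cid] for cid in claim_ids)
```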
Steps
1. Access the View Table Details report for the table to delete from.
2. Select Quick Actions > Data Change Task.
3. (Optional) Change the data change task name that Workday created automatically at the top of the left side panel.
4. On the Source step, select the file that contains the values of the rows you want to delete.
5. On the Source Options step, define how to parse the data in the files.
6. On the Target step, select Delete as the Target Operation.
7. On the Mapping step, select a field in the target table to use as the Delete Key.
8. Select a source field for the target field that you specified as the delete key.
Related Information
Concepts
Prerequisites
Context
You can remove the data in a base dataset by truncating the dataset. Truncating a dataset removes the data from the dataset, but
retains its metadata, such as schema and transformations.
You might want to truncate a dataset when, for example:
The dataset integration uses Append mode and after several integrations the dataset contains some bad data.
The dataset contains data that your organization later deems to be sensitive.
If you've previously published this dataset or a derived dataset based on this dataset, the associated Prism data source still
contains data. If you also want to remove the data from the associated Prism data source, then you must publish the dataset
again. Publishing a truncated dataset makes the associated Prism data source empty, but active.
Steps
1. Access the View Dataset Details report for the dataset you want to truncate.
2. Select Actions > Dataset > Truncate.
Related Information
Concepts
Concept: Deleting Prism Data
Prerequisites
Context
After you publish a dataset, you can unpublish it if necessary. When you unpublish a dataset, Workday removes the Prism data
source that is based on the dataset, including all data in it. You might want to unpublish a dataset if you need to delete the dataset.
You can only unpublish a dataset if no reports use the associated Prism data source.
Steps
1. Access the View Dataset Details report for the dataset you want to unpublish.
2. Select Actions > Publishing > Unpublish Dataset.
Result
Related Information
Concepts
Concept: Deleting Prism Data
Prerequisites
Security: Prism Datasets: Publish domain in the Prism Analytics functional area.
Context
When you publish a dataset, Workday creates a Prism data source and populates it with rows containing the transformed data
from the dataset. You can delete the rows in a Prism data source created from a published dataset. When you delete rows in a
Prism data source, the Prism data source is empty and becomes inactive. If you need to delete rows from a Prism data source
created from a table enabled for analysis, then truncate the table.
You might want to delete the rows in a Prism data source if the rows contain incorrect data and you don’t want analysts creating
reports using the bad data. You can then edit the dataset to correct the transformation logic and publish the dataset again. When
you republish the dataset, Workday populates the empty, inactive Prism data source with the new data.
Any reports that use an inactive Prism data source will be broken until you publish the dataset again.
Steps
1. Access the View Prism Data Source page, and select the Prism data source whose rows you want to delete.
2. Select Actions > Prism Data Source > Delete Published Rows.
Result
Workday removes all data from the Prism data source, and changes the Prism data source status to inactive.
You create tables, datasets, and Prism data sources filled with data. However, there might be times when you need to remove data
from your tenant.
Action Notes
Make the Prism data source inactive. When a Prism data source is inactive, it exists in the tenant, but is empty and unavailable for querying in reports and discovery board visualizations.
You might want to make a Prism data source inactive if the data source contains incorrect data and you don’t want analysts creating reports using the bad data. You can make a Prism data source inactive whether or not any reports or visualizations use it.
How you make a Prism data source inactive depends on the object type it's based on:
Table. Edit the table schema and clear the Enable for Analysis option on the Edit Table task.
Dataset. Delete the rows in the Prism data source. Select Prism Data Source > Delete Published Rows from the related actions on the View Prism Data Source report.
Delete the Prism data source, including all data in it. You can only remove a Prism data source when no reports or visualizations currently use the associated Prism data source.
How you remove a Prism data source depends on the object type it's based on:
Table. Delete the table from the Data Catalog. Right-click the table from the Data Catalog and select Delete.
Remove all data from a dataset (truncate). You can remove the data in a base dataset by truncating the dataset. Truncating a dataset removes all data from the dataset. However, the dataset retains its metadata, such as schema and transformations.
You might want to truncate a dataset if the dataset integration uses Append mode and after several integrations the dataset contains some bad data.
Delete rows from a table. You can delete 1, multiple, or all rows from a table. When you delete rows from a table, the table remains in the Data Catalog.
Truncate a table. When you truncate a table, Workday removes all rows from the table, and keeps the empty table in the Data Catalog.
If you selected the Enable for Analysis option for this table, then truncating the table also removes all data from the associated Prism data source. The Prism data source is empty, but active.
Delete a table. When you delete a table, Workday removes the table definition from the Data Catalog, including all data contained in it.
Right-click the table from the Data Catalog and select Delete.
Related Information
Tasks
Truncate Data in a Dataset
Unpublish a Dataset
Delete Rows from a Prism Data Source
Prerequisites
Security: Manage: Analytic Data Sources domain in the Analytical Framework functional area.
Context
Install an analytic data source and define a schedule for updating the data in it.
Steps
Result
Workday creates the analytic data source and its schedule. Workday loads data from Workday business objects into the data
source at the next scheduled time.
Next Steps
You can:
Related Information
Concepts
Concept: Workday-Delivered Analytic Data Sources
Concept: Managing Analytic Data Source Schedules
Prerequisites
Security: Manage: Analytic Data Sources domain in the Analytical Framework functional area.
Context
You can uninstall an analytic data source to remove all data from that data source. You might want to uninstall the data source
if you encounter a critical error while using it.
Any reports that use the data source display no results and an error the next time you run them.
You can re-install the data source before running any existing reports that use the data source.
Steps
An analytic data source is a Workday-blended and delivered data source updated on a regular schedule that can contain Workday
data or external data.
Analytic data sources use Prism Analytics and are designed for high performance and analytical reporting. Although analytic data
sources can contain Workday data, the data isn't updated in real time like Workday-delivered data sources directly based on
business objects. When you install the analytic data source, you define how often the data updates, such as daily, weekly, or
monthly.
On the schedule you define, Workday:
Runs a background job that extracts data from Workday or external sources.
Blends and transforms the data using Prism Analytics.
Loads the data into the data source.
This processing takes time, depending on how much data is in your tenant and how often you schedule the data updates. As a
result, the freshness of the data in your analytic data source depends on both the schedule frequency and the processing time
required to update the data.
Related Information
Tasks
Install and Schedule an Analytic Data Source
After you install an analytic data source, you can manage the schedule you created for the data source. You need access to the
Manage: Analytic Data Sources domain in the Analytical Framework functional area to manage the schedule.
You can use these tasks and reports to manage the analytic data source schedule:
Cancel Analytic Data Source Publish Schedule: You can only cancel a schedule that currently exists.
Create Analytic Data Source Publish Schedule: You can only create a schedule if you previously canceled the schedule.
Edit Analytic Data Source Publish Schedule: Change the schedule settings on the Schedule tab.
The Peakon Employee Voice analytic data source enables you to create custom employee engagement reporting. It extracts non-
aggregated answers from the Peakon system, enabling you to analyze your Peakon data in more detail.
This analytic data source includes scores and comments from all question sets that you measure on your Peakon Employee Voice
survey, including these:
Engagement
Diversity and Inclusion
Health and Wellbeing
COVID-19
All company-specific questions
Consider the sensitive nature of the Peakon scores data as you interact with this analytic data source and its underlying dataset.
All scores and comments are attached to an employee ID, making it possible to identify the individual scores and comments left by
an employee on their engagement survey. This analytic data source is designed for companies with the appropriate safeguards.
Implementing a fully transparent survey requires a lot of trust from your employees. Ensure this trust by securing their data through
your analysis in Workday. You secure this data by:
Configuring the security policy for the Peakon Responses Analytic Data Source domain.
Providing users access to the Peakon Employee Voice Raw Data dataset in the Data Catalog. All users with unconstrained
access to the Data Catalog can access the data in the Peakon dataset and can share it with Prism users who have
constrained access.
Workday recommends that you only provide access to this data to highly privileged users before sharing anonymized data with a
wider audience.
Note: To install this analytic data source:
1. Create or edit a user-based security group that contains the users who can use the data source and raw data dataset.
2. Edit the domain security policy for the Peakon Responses Analytic Data Source domain, and provide the security group both View and Modify permissions.
3. Set Up Dataset Sharing. Add the user-based security group in the Role Assignees Restricted To column for the Prism Dataset Viewer Workday Role.
4. Activate pending security policy changes.
Note: You can migrate between tenants any derived datasets, discovery boards, custom reports, or Prism data sources that you create based on the Peakon Employee Voice Raw Data dataset. However, before you migrate these assets to a different tenant, you must install the Peakon Employee Voice analytic data source in the target tenant.
Note: Consider the sensitive nature of the highly confidential Peakon scores data as you configure this domain security policy.
Workday recommends that you only provide access to this data to highly privileged users before sharing anonymized data with a
wider audience.
Related Information
Concepts
Concept: Workday-Delivered Analytic Data Sources
Tasks
Install and Schedule an Analytic Data Source
Prism Analytics includes its own expression language that's composed of functions and operators. You use the Prism expression
language to create an expression, which you can use to create new values or filter existing values.
An expression computes or produces a value by combining fields, constant values, operators, and functions. An expression outputs
a value of a particular field type, such as Numeric or Text values. Simple expressions can be a single constant value, the values of a
given field, or just a function. You can use operators to join two or more simple expressions into a complex expression.
You can use expressions in these places:
Calculated fields. Use expressions to define calculated fields that operate on the source data. A calculated field expression generates its values based on a calculation or condition, and returns a value for each input row. Calculated field expressions can contain values from other fields, constants, mathematical operators, comparison operators, or built-in row functions.
Filter stages. Use an expression in a Filter stage to limit the scope of the source data of a dataset.
The expression builder helps you create calculated field expressions in a dataset. It displays the available fields in the dataset, plus
the list of the Prism functions. It validates your expressions for correct syntax, input field types, and so on.
This example demonstrates how to use the CONCAT() function to concatenate the values in the month, day, and year fields
separated by the literal forward slash character:
CONCAT([month],"/",[day],"/",[year])
A function return value might be the same as its input type or it might be an entirely new field type. Example: The TO_DATE()
function takes a Text as input, but outputs a Date value. If a function expects a Text, but is passed another field type as input, the
function returns an error.
Typically, functions are classified by what field type they take or what purpose they serve. Example: CONCAT() is a text function and
TO_DATE() is a field type conversion function.
Nesting Functions
Functions can take other functions as arguments. Example: You can use the CONCAT function as an argument to the TO_DATE()
function. The final result is a Date value in the format 10/31/2014.
TO_DATE(CONCAT([month],"/",[day],"/",[year]),"MM/dd/yyyy")
The nested function must return the correct field type. So, because TO_DATE() expects text input and CONCAT() returns a text, the
nesting succeeds.
Enclose field names in square brackets. Examples:
TO_INT([sales])
TO_INT([Sale Amount])
TO_INT([2013_data])
TO_INT([count])
If a field name contains a ] (closing square bracket), you must escape the closing square bracket by doubling it ]]. Suppose you
have this field name:
[Total Sales]
You enclose the entire field name in square brackets and escape the closing bracket that is part of the actual field name:
[[Total Sales]]]
To escape a literal quote within a literal value itself, double the literal quote character. Example:
The REGEX() function is a special case. In the REGEX() function, Text expressions are also enclosed in quotes. When a Text
expression contains literal quotes, double the literal quote character. Example:
REGEX([height], "\d\'(\d)+""")
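Both escaping rules, doubling a closing square bracket inside a field name and doubling a literal quote inside a quoted value, can be captured in small helpers. This is an illustrative Python sketch (the function names are not part of Prism):

```python
def escape_field_reference(name: str) -> str:
    """Wrap a field name in square brackets, doubling any ] inside it."""
    return "[" + name.replace("]", "]]") + "]"

def escape_text_literal(value: str) -> str:
    """Quote a text literal, doubling any embedded quote characters."""
    return '"' + value.replace('"', '""') + '"'

print(escape_field_reference("[Total Sales]"))  # [[Total Sales]]]
print(escape_text_literal("5'10\""))
```

The first call reproduces the [[Total Sales]]] example from the text above.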
yyyy-MM-ddTHH:mm:ss:SSSZ
Example:
If the Filter expression is a shortened version of the full format, then Prism assigns any values that aren't included a value of zero
(0). Example: This expression:
Is equivalent to:
If the date value is in Text format rather than a Date format, you must enclose the value in quotes.
To refer to a literal date value in a calculated field expression, you must specify the format of the date and time components using
TO_DATE, which takes a Text literal argument and a format string. Example:
Related Information
Tasks
Comparison operators compare the equivalence or inequivalence of 2 expressions of the same field type. The result of
a comparison expression is a Boolean value: it returns true, false, or NULL for invalid comparisons, such as comparing a text value to a numeric
value. Boolean expressions are most often used to specify data processing conditions or filter criteria.
This expression compares the value in the age field to a literal number value. If true, it returns the appropriate boolean value.
BETWEEN min_value AND max_value: Test whether a date or numeric value is within the min and max values (inclusive). Example: [year] BETWEEN 2014 AND 2016
Use logical operators in expressions to test for a condition. Logical operators define Boolean expressions.
You might want to use logical operators in Filter transformations or CASE expressions. Filters test if a field or value meets some
condition, such as testing if the value in a Date field falls between 2 other dates:
Weekday NOT IN("Saturday","Sunday")
Arithmetic operators perform basic math operations on 2 expressions of the same field type resulting in a numeric value.
Example: You can calculate the gross profit margin percentage using the values of a [total_revenue] and [total_cost] field:
+ (addition): [amount] + 10
- (subtraction): [amount] - 10
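The gross profit margin calculation mentioned above can be sketched in ordinary arithmetic. This Python fragment uses assumed sample values for one row; the formula mirrors a Prism expression like (([total_revenue] - [total_cost]) / [total_revenue]) * 100:

```python
# Sample values for a single row (assumed for illustration).
total_revenue = 1200.0
total_cost = 900.0

# Gross profit margin as a percentage of revenue.
gross_margin_pct = (total_revenue - total_cost) / total_revenue * 100
print(gross_margin_pct)  # 25.0
```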
1.17.5.1 | BUILD_CURRENCY
Description
BUILD_CURRENCY is a row function that constructs a Currency field from a numeric value and a Text or Instance value that contains
a valid currency code. When the currency code isn't valid, this function returns NULL.
Syntax
BUILD_CURRENCY(number_expression,currency_code_expression)
Return Value
Returns a value of type Currency.
Input Parameters
number_expression
Required. A field or expression of type Double, Numeric, Integer, or Long.
currency_code_expression
Required. A field or expression of type Text or Instance that contains valid currency code data.
Examples
Convert the values of the Sale Price field (Numeric type) to a Currency field type using the currency codes from the Currency Code
field:
1.17.5.2 | CAST
Description
CAST is a row function that converts data values from one field type (data type) to another.
Syntax
CAST(field_name AS field_type)
Return Value
Returns one value per row of the specified field type.
Input Parameters
field_name
Required. A field or expression of a supported field type.
field_type
Required. The field type to convert the data values into.
To convert a field to the Text field type, specify String for this parameter.
When specifying Boolean as the field type, CAST converts the value of zero (0) to False, and all other values to True.
When specifying Instance or Multi-Instance as the field type, you must specify the business object using its unique identifier (WID).
Use this syntax:
Instance(business_object_WID)
Multi_Instance(business_object_WID)
When specifying Numeric as the field type, specify the number of digits to the left of the decimal point (integers) and the number of
digits to the right of decimal point (decimals). Use this syntax:
Decimal(integers,decimals)
Ensure that the number of integer digits specified is large enough to capture all possible data values. If the value for a row has
more integer digits than the number of integer digits specified in the function, then CAST returns NULL. Example: CAST(99.9 AS
decimal(1,1)) returns NULL.
Examples
Convert the WID values of the Region field from Text to Instance:
CAST([Region] AS Instance(eecb565181284b6a8ae8b45dc3ed1451))
CAST("3b122818d7934d1c8c663ddbe1937819" AS Instance(eecb565181284b6a8ae8b45dc3ed1451))
CAST([amount] AS string)
CAST([average_rating] AS decimal(1,2))
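The NULL-on-overflow rule for CAST to Numeric can be modeled in a few lines. This is a behavioral sketch only, not Workday's implementation; the function name is illustrative:

```python
def cast_decimal(value, integers, decimals):
    """Mimic CAST(x AS decimal(i,d)): return None (NULL) when the value
    has more integer digits than declared; otherwise keep the value
    rounded to the declared number of decimals. Illustrative sketch.
    """
    int_part = int(abs(value))
    if int_part != 0 and len(str(int_part)) > integers:
        return None
    return round(value, decimals)

print(cast_decimal(99.9, 1, 1))  # None, like CAST(99.9 AS decimal(1,1))
print(cast_decimal(9.9, 1, 1))   # 9.9
```

Sizing the integer digits generously, as the text advises, avoids the silent NULLs this models.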
1.17.5.3 | EPOCH_MS_TO_DATE
Description
EPOCH_MS_TO_DATE is a row function that converts Long values to Date values, where the input number represents the number of
milliseconds since the epoch.
Syntax
EPOCH_MS_TO_DATE(long_expression)
Return Value
Returns one value per row of type Date in UTC format yyyy-MM-dd HH:mm:ss:SSS Z.
Input Parameters
long_expression
Required. A field or expression of type Long representing the number of milliseconds since the epoch date (January 1, 1970
00:00:00:000 GMT).
Examples
Convert a number representing the number of milliseconds from the epoch to a human-readable date and time:
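As a general analogue of this conversion, Python's datetime module performs the same milliseconds-since-epoch-to-UTC mapping (the helper name is illustrative):

```python
from datetime import datetime, timezone

def epoch_ms_to_date(ms: int) -> datetime:
    """Python analogue of EPOCH_MS_TO_DATE: milliseconds since the
    epoch (1970-01-01 00:00:00 UTC) to a UTC timestamp."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(epoch_ms_to_date(0))           # the epoch itself
print(epoch_ms_to_date(86_400_000))  # exactly one day later
```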
1.17.5.4 | EXTRACT_AMOUNT
Description
EXTRACT_AMOUNT is a row function that takes a Currency value and extracts the numeric amount as a Numeric value.
Syntax
EXTRACT_AMOUNT(currency_expression)
Return Value
Returns a value of type Numeric.
Input Parameters
currency_expression
Required. A field or expression of type Currency.
Examples
Get the numeric values from the Salary field (Currency field type):
EXTRACT_AMOUNT([Salary])
1.17.5.5 | EXTRACT_CODE
Description
EXTRACT_CODE is a row function that takes a Currency value and extracts the currency code as an Instance value.
Syntax
EXTRACT_CODE(currency_expression)
Return Value
Returns a value of type Instance.
Input Parameters
currency_expression
Required. A field or expression of type Currency.
Examples
Get the currency code information from the Salary field (Currency field type) as an Instance field:
EXTRACT_CODE([Salary])
1.17.5.6 | EXTRACT_CODE_TEXT
Description
EXTRACT_CODE_TEXT is a row function that takes a Currency value and extracts the currency code as a Text value.
Syntax
EXTRACT_CODE_TEXT(currency_expression)
Return Value
Returns a value of type Text.
Input Parameters
currency_expression
Required. A field or expression of type Currency.
Examples
Get the currency code information from the Salary field (Currency field type) as a Text field:
EXTRACT_CODE_TEXT([Salary])
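The currency functions in the preceding sections can be pictured as operations on an amount-plus-code pair. This is a toy Python model under that assumption; the Currency class, the code list, and the function names are illustrative, not Workday's internal representation:

```python
from dataclasses import dataclass

@dataclass
class Currency:
    """Toy stand-in for Prism's Currency type: an amount plus a code."""
    amount: float
    code: str

VALID_CODES = {"USD", "EUR", "GBP"}  # illustrative subset of currency codes

def build_currency(amount, code):
    """BUILD_CURRENCY: returns None (NULL) when the code isn't valid."""
    return Currency(amount, code) if code in VALID_CODES else None

def extract_amount(c: Currency) -> float:
    """EXTRACT_AMOUNT: the numeric portion."""
    return c.amount

def extract_code_text(c: Currency) -> str:
    """EXTRACT_CODE_TEXT: the currency code as text."""
    return c.code

salary = build_currency(1234.56, "USD")
print(extract_amount(salary), extract_code_text(salary))  # 1234.56 USD
```

EXTRACT_CODE differs from EXTRACT_CODE_TEXT only in returning the code as an Instance rather than Text, which this sketch does not model.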
1.17.5.7 | TO_BOOLEAN
Description
TO_BOOLEAN is a row function that converts Text, Boolean, Integer, Long, or Numeric values to Boolean.
Syntax
TO_BOOLEAN(expression)
Return Value
Returns one value per row of type Boolean.
Input Parameters
expression
Required. A field or expression of type Text, Boolean, Integer, Long, or Numeric.
Examples
Convert the values of the is_contingent field to a Boolean:
TO_BOOLEAN([is_contingent])
These expressions return true:
TO_BOOLEAN("TRUE")
TO_BOOLEAN("1")
TO_BOOLEAN(1.0)
These expressions return false:
TO_BOOLEAN("False")
TO_BOOLEAN("0")
TO_BOOLEAN(0.0)
These expressions return NULL:
TO_BOOLEAN("correct")
TO_BOOLEAN("1.0")
TO_BOOLEAN(1.1)
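Assuming the example values group as true, false, and NULL in that order, TO_BOOLEAN's behavior can be sketched as a mapping. This is an inference from the examples, not a documented specification:

```python
def to_boolean(value):
    """Sketch of TO_BOOLEAN as inferred from the examples: recognized
    true/false forms convert; anything else returns None (NULL)."""
    if isinstance(value, str):
        if value.strip().lower() == "true" or value == "1":
            return True
        if value.strip().lower() == "false" or value == "0":
            return False
        return None                      # e.g. "correct", "1.0"
    if isinstance(value, (int, float)):
        if value == 1:
            return True
        if value == 0:
            return False
        return None                      # e.g. 1.1
    return None

print(to_boolean("TRUE"), to_boolean("0"), to_boolean(1.1))  # True False None
```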
1.17.5.8 | TO_CURRENCY
Description
TO_CURRENCY is a row function that converts Text values that contain valid currency-formatted data to Currency values.
Syntax
TO_CURRENCY(expression)
Return Value
Returns a value of type Currency.
Input Parameters
expression
Required. A field or expression of type Text that represents a valid currency-formatted value.
A valid currency-formatted value meets these requirements:
Examples
Convert this text value to a Currency field type:
TO_CURRENCY("1234.56 USD")
Convert the values of the Grant Price field to a Currency field type using the currency codes in the Grant Code field:
Convert the Sale Price field Text field to a Currency field, but first transform the occurrence of any N/A values to NULL values using
a CASE expression:
TO_CURRENCY(CASE WHEN [Sale Price]="N/A" THEN NULL ELSE [Sale Price] END)
1.17.5.9 | TO_DATE
Description
TO_DATE is a row function that converts Text values to Date values, and specifies the format of the date and time elements in the
string.
Syntax
TO_DATE(string_expression,"date_format")
Return Value
Returns one value per row of type Date (which by definition is in UTC).
Input Parameters
string_expression
Required. A field or expression of type Text.
date_format
Required. A pattern that describes how the date is formatted.
Examples
Define a new Date Prism calculated field based on the order_date base field, which contains timestamps in the format of:
2014.07.10 at 15:08:56 PDT:
Define a new Date Prism calculated field by first combining individual month, day, year, and depart_time fields (using CONCAT), and
performing a transformation on depart_time to make sure three-digit times are converted to four-digit times (using
REGEX_REPLACE):
TO_DATE(CONCAT([month],"/",[day],"/",[year],":",
REGEX_REPLACE([depart_time],"\b(\d{3})\b","0$1")),"MM/dd/yyyy:HHmm")
Define a new Date Prism calculated field based on the created_at base field, which contains timestamps in the format of: Sat Jan
25 16:35:23 +0800 2014 (this is the timestamp format returned by Twitter's API):
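Prism's TO_DATE pairing of a text value with a format pattern maps closely onto Python's strptime, with Python's own format symbols (%m for MM, %H for HH, and so on). A hedged sketch with an assumed sample value:

```python
from datetime import datetime

# Prism: TO_DATE([some_field],"MM/dd/yyyy:HHmm")
# Python analogue with an assumed sample text value.
value = "10/31/2014:0945"
parsed = datetime.strptime(value, "%m/%d/%Y:%H%M")
print(parsed)  # 2014-10-31 09:45:00
```

A mismatch between the pattern and the text raises an error in Python, much as a bad pattern yields no usable Date in Prism.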
Related Information
Reference
Reference: Date Format Symbols
1.17.5.10 | TO_DECIMAL
Description
TO_DECIMAL is a row function that converts Text, Boolean, Integer, Long, Double, or Numeric values to Numeric values with the
default number of digits before and after the decimal point.
Syntax
TO_DECIMAL(expression)
Return Value
Returns one value per row of type Numeric with the default number of digits before and after the decimal point.
Input Parameters
expression
Required. A field or expression of type Text (must be numeric characters), Boolean, Integer, Long, Double, or Numeric.
Examples
Convert the values of the average_rating field to a Numeric field type:
TO_DECIMAL([average_rating])
Convert the average_rating field to a Numeric field type, but first transform the occurrence of any NA values to NULL values using a
CASE expression:
1.17.5.11 | TO_DOUBLE
Description
TO_DOUBLE is a row function that converts Text, Boolean, Integer, Long, or Double values to Double (a type of numeric) values.
Syntax
TO_DOUBLE(expression)
Return Value
Returns one value per row of type Double.
Input Parameters
expression
Required. A field or expression of type Text (must be numeric characters), Boolean, Integer, Long, or Double.
Examples
Convert the values of the average_rating field to a Double field type:
TO_DOUBLE([average_rating])
Convert the average_rating field to a Double field type, but first transform the occurrence of any NA values to NULL values using a
CASE expression:
1.17.5.12 | TO_INT
Description
TO_INT is a row function that converts Text, Boolean, Integer, Long, or Double values to Integer (whole number) values. When
converting Double values, everything after the decimal will be truncated (not rounded up or down).
Syntax
TO_INT(expression)
Return Value
Returns one value per row of type Integer.
Input Parameters
expression
Required. A field or expression of type Text, Boolean, Integer, Long, or Double. If a Text field contains non-numeric characters,
the function returns NULL.
Examples
Convert the values of the average_rating field to an Integer field type:
TO_INT([average_rating])
Convert the flight_duration field to an Integer field type, but first transform the occurrence of any NA values to NULL values using a
CASE expression:
1.17.5.13 | TO_LONG
Description
TO_LONG is a row function that converts Text, Boolean, Integer, Long, Decimal, Date, or Double values to Long (whole number)
values. When converting Decimal or Double values, everything after the decimal will be truncated (not rounded up or down).
Syntax
TO_LONG(expression)
Return Value
Returns one value per row of type Long.
Input Parameters
expression
Required. A field or expression of type Text (must be numeric characters only, no period or comma), Boolean, Integer, Long,
Decimal, Date, or Double. When a Text field value includes a decimal, the function returns a NULL value.
Examples
Convert the values of the average_rating field to a Long field type:
TO_LONG([average_rating])
Convert the average_rating field to a Long field type, but first transform the occurrence of any NA values to NULL values using a
CASE expression:
1.17.5.14 | TO_STRING
Description
TO_STRING is a row function that converts values of other data types to Text (character) values.
Syntax
TO_STRING(expression)
TO_STRING(date_expression,date_format)
Return Value
Returns one value per row of type Text.
Input Parameters
expression
A field or expression of type Text, Boolean, Integer, Long, Numeric, Double, Instance, or Multi-Instance. When you convert
an Instance or Multi-Instance field to a string, this function returns the unique identifier (WID), not the display name, of the field
value. When a Multi-Instance field contains more than 1 value, this function concatenates each value into a single string with no
spaces.
date_expression
A field or expression of type Date.
date_format
If converting a Date to Text, a pattern that describes how the date is formatted. See TO_DATE for the date format patterns.
Examples
Convert the values of the sku_number field to a Text field type:
TO_STRING([sku_number])
Convert values in the age column into range-based groupings (binning), and cast output values to a Text:
TO_STRING(CASE WHEN [age] <= 25 THEN "0-25" WHEN [age] <= 50 THEN "26-50" ELSE "over 50" END)
Convert the values of a timestamp Date field to Text, where the timestamp values are in the format of: 2002.07.10 at 15:08:56 PDT:
1.17.6.1 | DAYS_BETWEEN
Description
DAYS_BETWEEN is a row function that calculates the whole number of days (ignoring time) between two date values (value1 -
value2).
Syntax
DAYS_BETWEEN(date_1,date_2)
Return Value
Returns one value per row of type Integer.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of days to ship a product by subtracting the value of the order_date field from the ship_date field:
DAYS_BETWEEN([ship_date],[order_date])
Calculate the number of days since a product's release by subtracting the value of the release_date field from the current date (the
result of the TODAY expression):
DAYS_BETWEEN(TODAY(),[release_date])
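The value1 - value2 semantics above can be sketched with Python date arithmetic, which likewise produces a signed whole-day difference:

```python
from datetime import date

def days_between(d1: date, d2: date) -> int:
    """Sketch of DAYS_BETWEEN: whole days, computed as value1 - value2."""
    return (d1 - d2).days

# ship_date minus order_date, as in the first example (sample dates assumed).
print(days_between(date(2014, 7, 15), date(2014, 7, 10)))  # 5
```

Note the argument order matters: reversing it flips the sign of the result.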
1.17.6.2 | DATE_ADD
Description
DATE_ADD is a row function that adds the specified time interval to a date value.
Syntax
DATE_ADD(date,quantity,"interval")
Return Value
Returns a value of type Date.
Input Parameters
date
Required. A field name or expression that returns a date value.
quantity
Required. An integer value. To add time intervals, use a positive integer. To subtract time intervals, use a negative integer.
interval
Required. One of the following time intervals:
Examples
Add 45 days to the value of the invoice_date field to calculate the date a payment is due:
DATE_ADD([invoice_date],45,"day")
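The invoice-due-date example can be sketched with a timedelta. This models only day-based intervals; month and year intervals need calendar-aware logic not shown here, and the sample invoice date is assumed:

```python
from datetime import date, timedelta

def date_add(d: date, quantity: int, interval: str) -> date:
    """Sketch of DATE_ADD for day-based intervals; a negative
    quantity subtracts, as in the Prism function."""
    units = {"day": 1, "week": 7}
    return d + timedelta(days=quantity * units[interval])

# DATE_ADD([invoice_date],45,"day") with an assumed invoice date.
print(date_add(date(2014, 7, 1), 45, "day"))  # 2014-08-15
```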
1.17.6.3 | EXTRACT
Description
EXTRACT is a row function that returns the specified portion of a date value.
Syntax
EXTRACT("extract_value",date)
Return Value
Returns the specified extracted value as type Integer. EXTRACT removes leading zeros. For example, the month of April returns a
value of 4, not 04.
Input Parameters
extract_value
Required. One of the following extract values:
millisecond - Returns the millisecond portion of a date value. For example, an input date value of 2012-08-15
20:38:40.213 would return an integer value of 213.
second - Returns the second portion of a date value. For example, an input date value of 2012-08-15 20:38:40.213
would return an integer value of 40.
minute - Returns the minute portion of a date value. For example, an input date value of 2012-08-15 20:38:40.213
would return an integer value of 38.
hour - Returns the hour portion of a date value. For example, an input date value of 2012-08-15 20:38:40.213 would
return an integer value of 20.
day - Returns the day portion of a date value. For example, an input date value of 2012-08-15 would return an integer value of 15.
week - Returns the ISO week number for the input date value. For example, an input date value of 2012-01-02 would return
an integer value of 1 (the first ISO week of 2012 starts on Monday January 2). An input date value of 2012-01-01 would
return an integer value of 52 (January 1, 2012 is part of the last ISO week of 2011).
month - Returns the month portion of a date value. For example, an input date value of 2012-08-15 would return an integer
value of 8.
quarter - Returns the quarter number for the input date value, where quarters start on January 1, April 1, July 1, or October
1. For example, an input date value of 2012-08-15 would return an integer value of 3.
year - Returns the year portion of a date value. For example, an input date value of 2012-01-01 would return an integer
value of 2012.
weekyear - Returns the year value that corresponds to the ISO week number of the input date value. For example, an input
date value of 2012-01-02 would return an integer value of 2012 (the first ISO week of 2012 starts on Monday January 2).
An input date value of 2012-01-01 would return an integer value of 2011 (January 1, 2012 is part of the last ISO week of
2011).
date
Required. A field name or expression that returns a date value.
Examples
Extract the hour portion from the order_date Date field:
EXTRACT("hour",[order_date])
Cast the value of the order_date Text field to a date value using TO_DATE, and extract the ISO week year:
EXTRACT("weekyear",TO_DATE([order_date],"MM/dd/yyyy HH:mm:ss"))
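The ISO week and week-year rules described above can be illustrated in Python, whose isocalendar() follows the same ISO-8601 definitions (a sketch, not Prism syntax; the extract helper is hypothetical and covers only some extract values):

```python
from datetime import date

def extract(part, d):
    # Sketch of EXTRACT; "week" and "weekyear" use Python's ISO-calendar
    # support, which matches the ISO-8601 rules the documentation describes.
    iso_year, iso_week, _ = d.isocalendar()
    return {"year": d.year, "quarter": (d.month - 1) // 3 + 1,
            "month": d.month, "day": d.day,
            "week": iso_week, "weekyear": iso_year}[part]
```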
1.17.6.4 | HOURS_BETWEEN
Description
HOURS_BETWEEN is a row function that calculates the whole number of hours (ignoring minutes, seconds, and milliseconds)
between two date values (value1 - value2).
Syntax
HOURS_BETWEEN(date_1,date_2)
Return Value
Returns one value per row of type Integer.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of hours to ship a product by subtracting the value of the order_date field from the ship_date field:
HOURS_BETWEEN([ship_date],[order_date])
1.17.6.5 | MILLISECONDS_BETWEEN
Description
MILLISECONDS_BETWEEN is a row function that calculates the whole number of milliseconds between two date values (value1 -
value2).
Syntax
MILLISECONDS_BETWEEN(date_1,date_2)
Return Value
Returns one value per row of type Integer.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of milliseconds it took to serve a web page by subtracting the value of the request_timestamp field from the
response_timestamp field:
MILLISECONDS_BETWEEN([response_timestamp],[request_timestamp])
1.17.6.6 | MINUTES_BETWEEN
Description
MINUTES_BETWEEN is a row function that calculates the whole number of minutes (ignoring seconds and milliseconds) between
two date values (value1 - value2).
Syntax
MINUTES_BETWEEN(date_1,date_2)
Return Value
Returns one value per row of type Integer.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of minutes it took for a user to click on an advertisement by subtracting the value of the
impression_timestamp field from the conversion_timestamp field:
MINUTES_BETWEEN([conversion_timestamp],[impression_timestamp])
1.17.6.7 | SECONDS_BETWEEN
Description
SECONDS_BETWEEN is a row function that calculates the whole number of seconds (ignoring milliseconds) between two date values
(value1 - value2).
Syntax
SECONDS_BETWEEN(date_1,date_2)
Return Value
Returns one value per row of type Integer.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of seconds it took for a user to click on an advertisement by subtracting the value of the
impression_timestamp field from the conversion_timestamp field:
SECONDS_BETWEEN([conversion_timestamp],[impression_timestamp])
1.17.6.8 | TODAY
Description
TODAY is a scalar function that returns the current system date as a date value (with no time portion). You can use it in other
expressions involving Date type fields, such as YEAR_DIFF. Note that TODAY is evaluated only at the time a dataset is published;
it is not re-evaluated with each query.
Syntax
TODAY()
Return Value
Returns the current system date (no time) as a date value.
Examples
Calculate a user's age using YEAR_DIFF to subtract the value of the birthdate field from the current date:
YEAR_DIFF(TODAY(), [birthdate])
Calculate the number of days since a product's release using DAYS_BETWEEN to subtract the value of the release_date field from the
current date:
DAYS_BETWEEN(TODAY(), [release_date])
1.17.6.9 | TRUNC
Description
TRUNC is a row function that truncates a date value to the specified format.
Syntax
TRUNC(date, "format")
Return Value
Returns a value of type Date truncated to the specified format.
Input Parameters
date
Required. A field or expression that returns a date value.
format
Required. One of the following format values:
millisecond - Returns a date value truncated to millisecond granularity. Has no effect since millisecond is already the
most granular format for date values. For example, an input date value of 2012-08-15 20:38:40.213 would return a date
value of 2012-08-15 20:38:40.213.
second - Returns a date value truncated to second granularity. For example, an input date value of 2012-08-15
20:38:40.213 would return a date value of 2012-08-15 20:38:40.000.
minute - Returns a date value truncated to minute granularity. For example, an input date value of 2012-08-15
20:38:40.213 would return a date value of 2012-08-15 20:38:00.000.
hour - Returns a date value truncated to hour granularity. For example, an input date value of 2012-08-15 20:38:40.213
would return a date value of 2012-08-15 20:00:00.000.
day - Returns a date value truncated to day granularity. For example, an input date value of 2012-08-15 20:38:40.213
would return a date value of 2012-08-15 00:00:00.000.
week - Returns a date value truncated to the first day of the week (starting on a Monday). For example, an input date value
of 2012-08-15 (a Wednesday) would return a date value of 2012-08-13 (the Monday prior).
month - Returns a date value truncated to the first day of the month. For example, an input date value of 2012-08-15 would
return a date value of 2012-08-01.
quarter - Returns a date value truncated to the first day of the quarter (January 1, April 1, July 1, or October 1). For
example, an input date value of 2012-08-15 20:38:40.213 would return a date value of 2012-07-01.
year - Returns a date value truncated to the first day of the year (January 1). For example, an input date value of 2012-08-
15 would return a date value of 2012-01-01.
weekyear - Returns a date value truncated to the first day of the ISO week-year (the ISO week starting with the Monday
nearest in time to January 1). For example, an input date value of 2008-08-15 would return a date value of 2007-12-31,
because the first day of the ISO week-year for 2008 is December 31, 2007 (the Monday closest to January 1).
Examples
Truncate the order_date date field to day granularity:
TRUNC([order_date],"day")
Cast the value of the order_date Text field to a date value using TO_DATE, and truncate it to day granularity:
TRUNC(TO_DATE([order_date],"MM/dd/yyyy HH:mm:ss"),"day")
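The truncation rules for a few of the granularities above can be sketched in Python (illustrative only; the trunc helper is hypothetical and covers only day, week, and month):

```python
from datetime import datetime, timedelta

def trunc(dt, fmt):
    # Partial sketch of TRUNC covering three of the documented granularities.
    floor_day = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    if fmt == "day":
        return floor_day
    if fmt == "week":
        # Back up to the preceding Monday (weekday() is 0 for Monday).
        return floor_day - timedelta(days=floor_day.weekday())
    if fmt == "month":
        return floor_day.replace(day=1)
    raise ValueError("granularity not sketched: " + fmt)
```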
1.17.6.10 | YEAR_DIFF
Description
YEAR_DIFF is a row function that calculates the fractional number of years between two date values (value1 - value2).
Syntax
YEAR_DIFF(date_1,date_2)
Return Value
Returns one value per row of type Double.
Input Parameters
date_1
Required. A field or expression of type Date.
date_2
Required. A field or expression of type Date.
Examples
Calculate the number of years a user has been a customer by subtracting the value of the registration_date field from the current
date (the result of the TODAY expression):
YEAR_DIFF(TODAY(),[registration_date])
Calculate a user's age by subtracting the value of the birthdate field from the current date (the result of the TODAY expression):
YEAR_DIFF(TODAY(),[birthdate])
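Since Prism's exact fractional-year convention isn't stated here, a rough Python sketch of YEAR_DIFF divides the day delta by the mean Gregorian year length (an approximation, not the documented algorithm):

```python
from datetime import date

def year_diff(d1, d2):
    # Approximation only: Prism's fractional-year day-count convention is
    # not documented here, so this uses the mean Gregorian year length.
    return (d1 - d2).days / 365.2425

age = year_diff(date(2023, 6, 1), date(1990, 6, 1))  # roughly 33 years
```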
1.17.7.1 | IS_VALID
Description
IS_VALID is a row function that returns 0 if the input expression evaluates to NULL, and 1 if it evaluates to a NOT NULL value. This
is useful in calculations where you want to exclude NULL values (such as when computing averages).
Syntax
IS_VALID(expression)
Return Value
Returns 0 if the expression evaluates to NULL, and 1 if it evaluates to a NOT NULL value.
Input Parameters
expression
Required. A field name or expression.
Examples
Define a Prism calculated field using IS_VALID. The field returns 1 for each row where the input value is NOT NULL and 0 where it
is NULL. In this example, we create a Prism calculated field (sale_amount_not_null) using the sale_amount field as the basis.
IS_VALID([sale_amount])
Then you can use the sale_amount_not_null Prism calculated field to calculate an accurate average for sale_amount that excludes
NULL values:
SUM([sale_amount])/SUM([sale_amount_not_null])
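The NULL-aware average pattern above can be sketched in Python (illustrative; the sample field values are made up):

```python
def is_valid(value):
    # IS_VALID: 1 when the value is NOT NULL, 0 when it is NULL.
    return 0 if value is None else 1

sale_amounts = [100.0, None, 250.0, None, 50.0]   # made-up sample rows
valid_count = sum(is_valid(v) for v in sale_amounts)
total = sum(v for v in sale_amounts if v is not None)
# Dividing by the count of valid rows excludes NULLs from the average,
# mirroring SUM([sale_amount]) / SUM([sale_amount_not_null]).
null_aware_average = total / valid_count
```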
1.17.8.1 | CREATE_MULTI_INSTANCE
Description
CREATE_MULTI_INSTANCE is a row function that constructs a Multi-Instance field from one or more provided Multi-Instance or
Instance fields.
Syntax
CREATE_MULTI_INSTANCE(field_name [, field_name])
Return Value
Returns one value per row of type Multi-Instance.
Input Parameters
field_name
Required. A field of type Multi-Instance or Instance. All instance values must use the same business object.
Examples
Create a Multi-Instance field out of multiple Instance fields:
CREATE_MULTI_INSTANCE(
CAST("070b0d082eee44e1928c808cc739b35f" AS Instance(eecb565181284b6a8ae8b45dc3ed1451)),
CAST("f4c49debb3dc483baa8707dfe683503c" AS Instance(eecb565181284b6a8ae8b45dc3ed1451))
)
1.17.8.2 | INSTANCE_CONTAINS_ANY
Description
INSTANCE_CONTAINS_ANY is a row function that compares a Multi-Instance or Instance field to either a Multi-Instance field, an
Instance field, or to a list of instance values, and returns True if at least one instance value exists in the first argument, and False if
none of them exist.
Syntax
INSTANCE_CONTAINS_ANY(input_field, comparison_value , [comparison_value])
Return Value
Returns one value per row of type Boolean. This function returns NULL when it receives a Text value that isn't formatted as a valid
instance value (WID format).
Input Parameters
input_field
Required. A field of type Multi-Instance or Instance.
comparison_value
Required. A field of type Multi-Instance or Instance, or a Text value of a valid instance value.
Examples
Compare the Worktags Multi-Instance field to the Instance fields Cost Center 1 and Cost Center 2:
INSTANCE_CONTAINS_ANY([Worktags],[Cost Center 1],[Cost Center 2])
1.17.8.3 | INSTANCE_COUNT
Description
INSTANCE_COUNT is a row function that returns the total number of instance values in a Multi-Instance or Instance field. This
function returns 0 when the field is empty.
Syntax
INSTANCE_COUNT(field_name)
Return Value
Returns one value per row of type Integer.
Input Parameters
field_name
Required. A field of type Multi-Instance or Instance.
Examples
Count the number of instance values in the Journal Lines Multi-Instance field:
INSTANCE_COUNT([Journal Lines])
1.17.8.4 | INSTANCE_EQUALS
Description
INSTANCE_EQUALS is a row function that compares a Multi-Instance or Instance field to either a Multi-Instance field, an Instance
field, or to a list of instance values, and checks if the first argument exactly matches the instance values provided in the other
arguments.
Syntax
INSTANCE_EQUALS(input_field, comparison_value [, comparison_value])
Return Value
Returns one value per row of type Boolean. This function returns NULL when it receives a Text value that isn't formatted as a valid
instance value (WID format).
Input Parameters
input_field
Required. A field of type Multi-Instance or Instance.
comparison_value
Required. A field of type Multi-Instance or Instance, or a Text value of a valid instance value.
Examples
Compare the Worktags Multi-Instance field to the Instance fields Cost Center 1 and Cost Center 2:
INSTANCE_EQUALS([Worktags],[Cost Center 1],[Cost Center 2])
1.17.8.5 | INSTANCE_IS_SUPERSET_OF
Description
INSTANCE_IS_SUPERSET_OF is a row function that compares a Multi-Instance field to either a Multi-Instance field, an Instance field,
or to a list of instance values, and returns True if every instance value exists in the first argument, and False if at least one doesn't
exist.
Syntax
INSTANCE_IS_SUPERSET_OF(input_field, comparison_value [, comparison_value])
Return Value
Returns one value per row of type Boolean. This function returns NULL when it receives a Text value that isn't formatted as a valid
instance value (WID format).
Input Parameters
input_field
Required. A field of type Multi-Instance.
comparison_value
Required. A field of type Multi-Instance or Instance, or a Text value of a valid instance value.
Examples
Compare the Worktags Multi-Instance field to the Instance fields Cost Center 1 and Cost Center 2:
INSTANCE_IS_SUPERSET_OF([Worktags],[Cost Center 1],[Cost Center 2])
1.17.9.1 | CASE
Description
CASE is a row function that evaluates each row in the dataset according to one or more input conditions, and outputs the specified
result when the input conditions are met.
Syntax
CASE WHEN input_condition [AND|OR input_condition] THEN output_expression [...] [ELSE other_output_expression] END
Return Value
Returns one value per row of the same type as the output expression. All output expressions must return the same field type.
If there are multiple output expressions that return different field types, then you will need to enclose your entire CASE expression in
one of the field type conversion functions, such as TO_INT, to explicitly cast all output values to a particular field type.
Input Parameters
WHEN input_condition
Required. The WHEN keyword is used to specify one or more Boolean expressions (see the supported conditional operators). If an
input value meets the condition, then the output expression is applied. Input conditions can include other row functions in their
expression, but cannot contain summarization functions or measure expressions. You can use the AND or OR keywords to combine
multiple input conditions.
THEN output_expression
Required. The THEN keyword is used to specify an output expression when the specified conditions are met. Output expressions
can include other row functions in their expression, but cannot contain summarization functions or measure expressions.
ELSE other_output_expression
Optional. The ELSE keyword can be used to specify an alternate output expression to use when the specified conditions are not
met. If an ELSE expression is not supplied, ELSE NULL is the default.
END
Required. Denotes the end of CASE function processing.
Examples
Convert values in the age column into range-based groupings (binning):
CASE WHEN [age] <= 25 THEN "0-25" WHEN [age] <= 50 THEN "26-50" ELSE "over 50" END
CASE WHEN [gender] = "M" THEN "Male" WHEN [gender] = "F" THEN "Female" ELSE "Unknown" END
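The binning example can be expressed in Python as an ordered if/else chain (illustrative only, not Prism syntax):

```python
def age_bin(age):
    # Python equivalent of the range-based CASE binning expression above;
    # conditions are tested in order, and the final return is the ELSE branch.
    if age <= 25:
        return "0-25"
    if age <= 50:
        return "26-50"
    return "over 50"
```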
The vehicle column contains the following values: truck, bus, car, scooter, wagon, bike, tricycle, and motorcycle. The following
example converts multiple values in the vehicle column into a single value:
Related Information
Reference
Comparison Operators
Logical Operators
Arithmetic Operators
1.17.9.2 | COALESCE
Description
COALESCE is a row function that returns the first valid value (NOT NULL value) from a comma-separated list of expressions.
Syntax
COALESCE(expression[,expression][,...])
Return Value
Returns one value per row of the same type as the first valid input expression.
Input Parameters
expression
At least one required. A field name or expression.
Examples
The following example shows an expression to calculate employee yearly income for exempt employees that have a salary and
non-exempt employees that have an hourly_wage. This expression checks the values of both fields for each row, and returns the
value of the first expression that is valid (NOT NULL).
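The first-non-null semantics can be sketched in Python (illustrative; the 52000 salary value is made up):

```python
def coalesce(*expressions):
    # Return the first NOT NULL value, or None when every input is NULL.
    return next((v for v in expressions if v is not None), None)

# Exempt employee: salary is NULL, so fall back to the next expression.
yearly_income = coalesce(None, 52000)
```

Note that only NULL is skipped; a valid value of 0 is still returned.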
1.17.10.1 | DIV
Description
DIV is a row function that divides two Long values and returns a quotient value of type Long (the result is truncated to 0 decimal
places).
Syntax
DIV(dividend,divisor)
Return Value
Returns one value per row of type Long.
Input Parameters
dividend
Required. A field or expression of type Long.
divisor
Required. A field or expression of type Long.
Examples
Cast the value of the file_size field to Long and divide by 1024:
DIV(TO_LONG([file_size]),1024)
1.17.10.2 | EXP
Description
EXP is a row function that raises the mathematical constant e to the power (exponent) of a numeric value and returns a value of
type Double.
Syntax
EXP(power)
Return Value
Returns one value per row of type Double.
Input Parameters
power
Required. A field or expression of a numeric type.
Examples
Raise e to the power given in the Value field:
EXP([Value])
When the Value field value is 2.0, the result is equal to 7.3890 when truncated to four decimal places.
1.17.10.3 | FLOOR
Description
FLOOR is a row function that returns the largest integer that is less than or equal to the input argument.
Syntax
FLOOR(double)
Return Value
Returns one value per row of type Double.
Input Parameters
double
Required. A field or expression of type Double.
Examples
Return the floor value of 32.6789:
FLOOR(32.6789)
1.17.10.4 | HASH
Description
HASH is a row function that evenly partitions data values into the specified number of buckets. It creates a hash of the input value
and assigns that value a bucket number. Equal values will always hash to the same bucket number.
Syntax
HASH(field_name,integer)
Return Value
Returns one value per row of type Integer corresponding to the bucket number that the input value hashes to.
Input Parameters
field_name
Required. The name of the field whose values you want to partition. When this value is NULL and the integer parameter is a value
other than zero or NULL, the function returns zero, otherwise it returns NULL.
integer
Required. The desired number of buckets. This parameter can be a numeric value of any field type, but when it is a non-integer
value, the value is truncated to an integer. When the value is zero or NULL, the function returns NULL. When the value is negative,
the function uses absolute value.
Examples
Partition the values of the username field into 20 buckets:
HASH([username],20)
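A Python sketch of the bucketing behavior (illustrative only — Prism's internal hash function is not documented here, so MD5 stands in to show stable, even partitioning):

```python
import hashlib

def hash_bucket(value, buckets):
    # Illustrative stand-in for HASH(field, integer).
    if buckets is None or int(buckets) == 0:
        return None          # zero or NULL bucket count returns NULL
    if value is None:
        return 0             # NULL input hashes to bucket 0
    digest = hashlib.md5(str(value).encode("utf-8")).hexdigest()
    # Equal values always produce the same digest, hence the same bucket;
    # a negative bucket count is used by absolute value.
    return int(digest, 16) % abs(int(buckets))
```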
1.17.10.5 | LN
Description
LN is a row function that returns the natural logarithm of a number. The natural logarithm is the logarithm to the base e, where e
(Euler's number) is a mathematical constant approximately equal to 2.718281828. The natural logarithm of a number x is the
power to which the constant e must be raised in order to equal x.
Syntax
LN(positive_number)
Return Value
Returns the exponent to which base e must be raised to obtain the input value, where e denotes the constant number 2.718281828.
The return value is the same field type as the input value.
Input Parameters
positive_number
Required. A field or expression that returns a number greater than 0. Inputs can be of type Integer, Long, Double.
Examples
Return the natural logarithm of base number e, which is approximately 2.718281828:
LN(2.718281828) returns 1
1.17.10.6 | MOD
Description
MOD is a row function that divides two Long values and returns the remainder value of type Long (the result is truncated to 0
decimal places).
Syntax
MOD(dividend,divisor)
Return Value
Returns one value per row of type Long.
Input Parameters
dividend
Required. A field or expression of type Long.
divisor
Required. A field or expression of type Long.
Examples
Cast the value of the file_size field to Long and divide by 1024:
MOD(TO_LONG([file_size]),1024)
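Both DIV and MOD truncate toward zero, which differs from Python's flooring // and % operators; a sketch (illustrative only; the 5000-byte file size is made up):

```python
import math

def div(dividend, divisor):
    # Truncating division (toward zero), matching a result "truncated to
    # 0 decimal places"; note that Python's // operator floors instead.
    return math.trunc(dividend / divisor)

def mod(dividend, divisor):
    # Remainder consistent with the truncating quotient above.
    return dividend - div(dividend, divisor) * divisor

kilobytes = div(5000, 1024)   # whole kilobytes in a 5000-byte file
remainder = mod(5000, 1024)   # leftover bytes
```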
1.17.10.7 | POW
Description
POW is a row function that raises a numeric value to the power (exponent) of another numeric value and returns a value of type
Double.
Syntax
POW(index,power)
Return Value
Returns one value per row of type Double.
Input Parameters
index
Required. A field or expression of a numeric type.
power
Required. A field or expression of a numeric type.
Examples
Calculate the compound annual growth rate (CAGR) percentage for a given investment over a five year span.
POW([Value],2)
POW([Value],0.5)
POW(0,0)
1.17.10.8 | ROUND
Description
ROUND is a row function that rounds a numeric value to the specified number of decimal places and returns a value of type Double.
Syntax
ROUND(numeric_expression,number_decimal_places)
Return Value
Returns one value per row of type Double.
Input Parameters
numeric_expression
Required. A field or expression of any numeric type.
number_decimal_places
Required. An integer that specifies the number of decimal places to round to.
Examples
Round the number 32.4678954 to two decimal places:
ROUND(32.4678954,2)
1.17.11.1 | CIDR_MATCH
Description
CIDR_MATCH is a row function that compares two Text arguments representing a CIDR mask and an IP address, and returns 1 if the
IP address falls within the specified subnet mask or 0 if it does not.
Syntax
CIDR_MATCH(CIDR_string, IP_string)
Return Value
Returns an Integer value of 1 if the IP address falls within the subnet indicated by the CIDR mask and 0 if it does not.
Input Parameters
CIDR_string
Required. A field or expression that returns a Text value containing either an IPv4 or IPv6 CIDR mask (Classless InterDomain
Routing subnet notation). An IPv4 CIDR mask can only successfully match IPv4 addresses, and an IPv6 CIDR mask can only
successfully match IPv6 addresses.
IP_string
Required. A field or expression that returns a Text value containing either an IPv4 or IPv6 internet protocol (IP) address.
Examples
Compare an IPv4 CIDR subnet mask to an IPv4 IP address:
CIDR_MATCH("60.145.56.0/24","60.145.56.246") returns 1
CIDR_MATCH("60.145.56.0/30","60.145.56.246") returns 0
CIDR_MATCH("fe80::/70","FE80::0202:B3FF:FE1E:8329") returns 1
CIDR_MATCH("fe80::/72","FE80::0202:B3FF:FE1E:8329") returns 0
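The same comparisons can be reproduced with Python's standard ipaddress module (a sketch, not the Prism implementation):

```python
import ipaddress

def cidr_match(cidr_string, ip_string):
    # 1 when the IP falls inside the CIDR subnet, 0 otherwise; an address
    # family mismatch (IPv4 mask vs IPv6 address) simply fails to match.
    try:
        network = ipaddress.ip_network(cidr_string, strict=False)
        return int(ipaddress.ip_address(ip_string) in network)
    except (ValueError, TypeError):
        return 0
```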
1.17.11.2 | CONCAT
Description
CONCAT is a row function that returns a Text value by concatenating (combining) the results of multiple Text expressions.
Syntax
CONCAT(value_expression[,value_expression][,...])
Return Value
Returns one value per row of type Text.
Input Parameters
value_expression
At least one required. A field name of any type, a literal string or number, or an expression that returns any value.
Examples
Combine the values of the month, day, and year fields into a single date field formatted as MM/DD/YYYY.
CONCAT([month],"/",[day],"/",[year])
1.17.11.3 | EXTRACT_COOKIE
Description
EXTRACT_COOKIE is a row function that extracts the value of the given cookie identifier from a semi-colon delimited list of cookie
key/value pairs. This function can be used to extract a particular cookie value from a combined web access log Cookie column.
Syntax
EXTRACT_COOKIE("cookie_list_string",cookie_key_string)
Return Value
Returns the value of the specified cookie key as type Text.
Input Parameters
cookie_list_string
Required. A field of type Text or literal string that has a semi-colon delimited list of cookie key=value pairs.
cookie_key_string
Required. The cookie key name for which to extract the cookie value.
Examples
Extract the value of the vID cookie from a literal cookie string:
Extract the value of the vID cookie from a field named Cookie:
EXTRACT_COOKIE([Cookie],"vID")
1.17.11.4 | EXTRACT_VALUE
Description
EXTRACT_VALUE is a row function that extracts the value for the given key from a string containing delimited key/value pairs.
Syntax
EXTRACT_VALUE(string,key_name [,delimiter], [pair_delimiter])
Return Value
Returns the value of the specified key as type Text.
Input Parameters
string
Required. A field of type Text or literal string that contains a delimited list of key/value pairs.
key_name
Required. The key name for which to extract the value.
delimiter
Optional. The delimiter used between a key and its value. If not specified, the value \u0003 is used. This is the Unicode escape
sequence for the end-of-text control character.
pair_delimiter
Optional. The delimiter used between key/value pairs when the input string contains more than one key/value pair. If not specified,
the value \u0002 is used. This is the Unicode escape sequence for the start-of-text control character.
Examples
Extract the value of the lastname key from a literal string of key/value pairs:
Extract the value of the email key from a Text field named contact_info that contains strings in the format of key:value,key:value:
EXTRACT_VALUE([contact_info],"email",":",",")
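A Python sketch of the key/value extraction (illustrative; the sample contact string is made up):

```python
def extract_value(string, key_name, delimiter="\u0003", pair_delimiter="\u0002"):
    # Split the input into key/value pairs, then return the value for
    # key_name, or None when the key is absent.
    for pair in string.split(pair_delimiter):
        key, _, value = pair.partition(delimiter)
        if key == key_name:
            return value
    return None

# Strings formatted as key:value,key:value — made-up sample data.
email = extract_value("name:Jo,email:jo@example.com", "email", ":", ",")
```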
Related Information
Reference
PACK_VALUES
1.17.11.5 | FILE_NAME
Description
FILE_NAME is a row function that returns the original file name from the source file system. This is useful when the source data that
comprises a dataset comes from multiple files, and there is useful information in the file names themselves (such as dates or
server names). You can use FILE_NAME in combination with other text processing functions to extract useful information from the
file name.
Syntax
FILE_NAME()
Return Value
Returns one value per row of type Text.
Examples
Your dataset is based on daily log files that use an 8 character date as part of the file name. For example, 20120704.log is the file
name used for the log file created on July 4, 2012. The following expression uses FILE_NAME in combination with SUBSTRING and
TO_DATE to create a date field from the first 8 characters of the file name.
TO_DATE(SUBSTRING(FILE_NAME(),0,8),"yyyyMMdd")
Your dataset is based on log files that use the server IP address as part of the file name. For example, 172.12.131.118.log is the log
file name for server 172.12.131.118. The following expression uses FILE_NAME in combination with REGEX to extract the IP address
from the file name.
REGEX(FILE_NAME(),"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.log")
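Both expressions can be mirrored in Python against literal file names (illustrative only, not Prism syntax):

```python
import re
from datetime import datetime

# Date extraction: first 8 characters of the daily log file name.
log_date = datetime.strptime("20120704.log"[0:8], "%Y%m%d").date()

# IP extraction: same regular expression as the REGEX example above.
match = re.match(r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.log",
                 "172.12.131.118.log")
server_ip = match.group(1) if match else None
```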
1.17.11.6 | HEX_TO_IP
Description
HEX_TO_IP is a row function that converts a hexadecimal-encoded Text value to a text representation of an IP address.
Syntax
HEX_TO_IP(string)
Return Value
Returns a value of type Text representing either an IPv4 or IPv6 address. The type of IP address returned depends on the input
string. An 8 character hexadecimal string returns an IPv4 address. A 32 character long hexadecimal string returns an IPv6 address.
IPv6 addresses are represented in full length, without removing any leading zeros and without using the compressed :: notation.
For example, 2001:0db8:0000:0000:0000:ff00:0042:8329 rather than 2001:db8::ff00:42:8329.
Input strings that don't contain either 8 or 32 valid hexadecimal characters return NULL.
Input Parameters
string
Required. A field or expression that returns a hexadecimal-encoded Text value. The hexadecimal string must be either 8 characters
long (in which case it's converted to an IPv4 address) or 32 characters long (in which case it's converted to an IPv6 address).
Examples
Return a plain text IP address for each hexadecimal-encoded string value in the byte_encoded_ips column:
HEX_TO_IP([byte_encoded_ips])
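A Python sketch of the conversion rules, using the standard ipaddress module (illustrative; the hexadecimal sample values are made up):

```python
import ipaddress

def hex_to_ip(hex_string):
    # 8 hex characters -> IPv4; 32 -> IPv6 in full (exploded) form,
    # matching the documented behavior; anything else returns None (NULL).
    try:
        raw = bytes.fromhex(hex_string)
    except ValueError:
        return None
    if len(raw) == 4:
        return str(ipaddress.IPv4Address(raw))
    if len(raw) == 16:
        return ipaddress.IPv6Address(raw).exploded
    return None
```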
1.17.11.7 | INSTR
Description
INSTR is a row function that returns an integer indicating the position of the first character of a specified occurrence of a
substring within a string. INSTR is similar to the FIND function in Excel, except that the first character is position 0 and
the order of the arguments is reversed.
Syntax
INSTR(search_string,substring,position,occurrence)
Return Value
Returns one value per row of type Integer. The first position is indicated with the value of zero (0).
Input Parameters
search_string
Required. The name of a field or expression of type Text (or a literal string).
substring
Required. A literal string or name of a field that specifies the substring to search for in search_string. Note that to search for the
double quotation mark ( " ) as a literal string, you must escape it with another double quotation mark: ""
position
Optional. An integer that specifies at which character in search_string to start searching for substring. A value of 0 (zero) starts the
search at the beginning of search_string. Use a positive integer to start searching from the beginning of search_string, and use a
negative integer to start searching from the end of search_string. When no position is specified, INSTR searches at the beginning of
the string (0).
occurrence
Optional. A positive integer that specifies which occurrence of substring to search for. When no occurrence is specified, INSTR
searches for the first occurrence of the substring (1).
Examples
Return the position of the first occurrence of the substring "http://" starting at the end of the url field:
INSTR([url],"http://",-1,1)
The following expression searches for the second occurrence of the substring "st" starting at the beginning of the string
"bestteststring". INSTR finds that the substring starts at the seventh character in the string, so it returns 6:
INSTR("bestteststring","st",0,2)
The following expression searches backward for the second occurrence of the substring "st" starting at 7 characters before the end
of the string "bestteststring". INSTR finds that the substring starts at the third character in the string, so it returns 2:
INSTR("bestteststring","st",-7,2)
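A Python sketch that reproduces the search semantics of the examples above (illustrative; the -1 not-found result is an assumption, since the documentation doesn't state what INSTR returns when no match exists):

```python
def instr(search_string, substring, position=0, occurrence=1):
    # Sketch of INSTR's 0-based search semantics.
    n = len(substring)
    if position >= 0:
        # Forward search: find the nth occurrence at or after position.
        idx = position - 1
        for _ in range(occurrence):
            idx = search_string.find(substring, idx + 1)
            if idx == -1:
                return -1  # assumption: not-found behavior is undocumented
        return idx
    # Negative position: count back from the end, then search backward,
    # so occurrences are counted right to left.
    idx = len(search_string) + position + 1
    for _ in range(occurrence):
        idx = search_string.rfind(substring, 0, idx + n - 1)
        if idx == -1:
            return -1
    return idx
```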
1.17.11.8 | JAVA_STRING
Description
JAVA_STRING is a row function that returns the unescaped version of a Java unicode character escape sequence as a Text value.
This is useful when you want to specify unicode characters in an expression. For example, you can use JAVA_STRING to specify the
unicode value representing a control character.
Syntax
JAVA_STRING(unicode_escape_sequence)
Return Value
Returns the unescaped version of the specified unicode character, one value per row of type Text.
Input Parameters
unicode_escape_sequence
Required. A Text value containing a unicode character expressed as a Java unicode escape sequence. Unicode escape sequences
consist of a backslash '\' (ASCII character 92, hex 0x5c), a 'u' (ASCII 117, hex 0x75), optionally one or more additional 'u' characters,
and four hexadecimal digits (the characters '0' through '9' or 'a' through 'f' or 'A' through 'F'). Such sequences represent the UTF-16
encoding of a Unicode character. For example, the letter 'a' is equivalent to '\u0061'.
Examples
Evaluates whether the currency field is equal to the yen symbol.
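The Prism expression for this example is not preserved above. For intuition, Python's unicode_escape codec performs the same unescaping of a Java-style \uXXXX sequence; the yen symbol is U+00A5:

```python
# Unescape a Java-style unicode escape sequence into the character it names.
yen = b"\\u00a5".decode("unicode_escape")
print(yen)  # ¥
```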
1.17.11.9 | JOIN_STRINGS
Description
JOIN_STRINGS is a row function that returns a Text by concatenating (combining together) the results of multiple Text values with
the separator in between each non-null value.
Syntax
JOIN_STRINGS(separator,value_expression, [value_expression][,...])
Return Value
Returns one value per row of type Text.
Input Parameters
separator
Required. A field name of type Text, a literal string, or an expression that returns a Text.
value_expression
At least one required. A field name of any type, a literal string or number, or an expression that returns any value.
Examples
Combine the values of the month, day, and year fields into a single date field formatted as MM/DD/YYYY.
JOIN_STRINGS("/",[month],[day],[year])
These expressions illustrate how NULL values are skipped. When all values are NULL:
JOIN_STRINGS("+",NULL,NULL,NULL)
When only some values are NULL, the separator appears only between the non-null values:
JOIN_STRINGS("+","a","b",NULL)
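A Python sketch of the documented skip-NULLs behavior, where None stands in for NULL; whether an all-NULL input yields NULL or an empty string is an assumption here:

```python
def join_strings(separator, *values):
    # Skip NULL (None) values; the separator only appears between kept values.
    kept = [str(v) for v in values if v is not None]
    return separator.join(kept) if kept else None  # all-NULL result assumed
```

For example, join_strings("/", 12, 31, 2023) returns "12/31/2023", and join_strings("+", "a", "b", None) returns "a+b".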
1.17.11.10 | JSON_DECIMAL
Description
JSON_DECIMAL is a row function that extracts a Numeric value from a field in a JSON object.
Syntax
JSON_DECIMAL(json_string, "json_field")
Return Value
Returns one value per row of type Numeric.
Input Parameters
json_string
Required. The name of a field or expression of type Text (or a literal string) that contains a valid JSON object.
json_field
Required. The key or name of the field value you want to extract.
For top-level fields, specify the name identifier (key) of the field.
To access fields within a nested object, specify a dot-separated path of field names (for example
top_level_field_name.nested_field_name).
To extract a value from an array, specify the dot-separated path of field names and the array position starting at 0 for the first value
in an array, 1 for the second value, and so on (for example, field_name.0).
If the name identifier contains dot or period characters within the name itself, escape the name by enclosing it in brackets (for
example, [field.name.with.dot].[another.dot.field.name]).
If the field name is null (empty), use brackets with nothing in between as the identifier, for example [].
Examples
If you had a top_scores field that contained a JSON object formatted like this (with the values contained in an array):
{"practice_scores":["538.67","674.99","1021.52"], "test_scores":["753.21","957.88","1032.87"]}
You could extract the third value of the test_scores array using this expression, which returns "1032.87":
JSON_DECIMAL([top_scores],"test_scores.2")
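The dot-separated path syntax shared by the JSON_* functions can be mimicked in Python with the standard json module. This sketch handles dot paths and numeric array indexes but not the bracket-escaped names described above:

```python
import json

def json_extract(json_string, json_field):
    # Walk a dot-separated path; numeric segments index into arrays (0-based).
    node = json.loads(json_string)
    for part in json_field.split("."):
        if isinstance(node, list):
            node = node[int(part)]
        else:
            node = node[part]
    return node

scores = '{"practice_scores":["538.67","674.99","1021.52"], "test_scores":["753.21","957.88","1032.87"]}'
print(json_extract(scores, "test_scores.2"))  # 1032.87
```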
1.17.11.11 | JSON_DOUBLE
Description
JSON_DOUBLE is a row function that extracts a Double value from a field in a JSON object.
Syntax
JSON_DOUBLE(json_string, "json_field")
Return Value
Returns one value per row of type Double.
Input Parameters
json_string
Required. The name of a field or expression of type Text (or a literal string) that contains a valid JSON object.
json_field
Required. The key or name of the field value you want to extract.
For top-level fields, specify the name identifier (key) of the field.
To access fields within a nested object, specify a dot-separated path of field names (for example
top_level_field_name.nested_field_name).
To extract a value from an array, specify the dot-separated path of field names and the array position starting at 0 for the first value
in an array, 1 for the second value, and so on (for example, field_name.0).
If the name identifier contains dot or period characters within the name itself, escape the name by enclosing it in brackets (for
example, [field.name.with.dot].[another.dot.field.name]).
If the field name is null (empty), use brackets with nothing in between as the identifier, for example [].
Examples
If you had a top_scores field that contained a JSON object formatted like this (with the values contained in an array):
{"practice_scores":["538.67","674.99","1021.52"], "test_scores":["753.21","957.88","1032.87"]}
You could extract the third value of the test_scores array using this expression, which returns "1032.87":
JSON_DOUBLE([top_scores],"test_scores.2")
1.17.11.12 | JSON_INTEGER
Description
JSON_INTEGER is a row function that extracts an Integer value from a field in a JSON object.
Syntax
JSON_INTEGER(json_string, "json_field")
Return Value
Returns one value per row of type Integer.
Input Parameters
json_string
Required. The name of a field or expression of type Text (or a literal string) that contains a valid JSON object.
json_field
Required. The key or name of the field value you want to extract.
For top-level fields, specify the name identifier (key) of the field.
To access fields within a nested object, specify a dot-separated path of field names (for example
top_level_field_name.nested_field_name).
To extract a value from an array, specify the dot-separated path of field names and the array position starting at 0 for the first value
in an array, 1 for the second value, and so on (for example, field_name.0).
If the name identifier contains dot or period characters within the name itself, escape the name by enclosing it in brackets (for
example, [field.name.with.dot].[another.dot.field.name]).
If the field name is null (empty), use brackets with nothing in between as the identifier, for example [].
Examples
If you had an address field that contained a JSON object formatted like this:
{"city":"San Mateo","state":"CA","zip_code":94403}
You could extract the zip_code value using this expression, which returns "94403":
JSON_INTEGER([address],"zip_code")
If you had a top_scores field that contained a JSON object formatted like this (with the values contained in an array):
{"practice_scores":["538","674","1021"], "test_scores":["753","957","1032"]}
You could extract the third value of the test_scores array using this expression, which returns "1032":
JSON_INTEGER([top_scores],"test_scores.2")
1.17.11.13 | JSON_LONG
Description
JSON_LONG is a row function that extracts a Long value from a field in a JSON object.
Syntax
JSON_LONG(json_string, "json_field")
Return Value
Returns one value per row of type Long.
Input Parameters
json_string
Required. The name of a field or expression of type Text (or a literal string) that contains a valid JSON object.
json_field
Required. The key or name of the field value you want to extract.
For top-level fields, specify the name identifier (key) of the field.
To access fields within a nested object, specify a dot-separated path of field names (for example
top_level_field_name.nested_field_name).
To extract a value from an array, specify the dot-separated path of field names and the array position starting at 0 for the first value
in an array, 1 for the second value, and so on (for example, field_name.0).
If the name identifier contains dot or period characters within the name itself, escape the name by enclosing it in brackets (for
example, [field.name.with.dot].[another.dot.field.name]).
If the field name is null (empty), use brackets with nothing in between as the identifier, for example [].
Examples
If you had a top_scores field that contained a JSON object formatted like this (with the values contained in an array):
{"practice_scores":["538","674","1021"], "test_scores":["753","957","1032"]}
You could extract the third value of the test_scores array using this expression, which returns "1032":
JSON_LONG([top_scores],"test_scores.2")
1.17.11.14 | JSON_STRING
Description
JSON_STRING is a row function that extracts a Text value from a field in a JSON object.
Syntax
JSON_STRING(json_string, "json_field")
Return Value
Returns one value per row of type Text.
Input Parameters
json_string
Required. The name of a field or expression of type Text (or a literal string) that contains a valid JSON object.
json_field
Required. The key or name of the field value you want to extract.
For top-level fields, specify the name identifier (key) of the field.
To access fields within a nested object, specify a dot-separated path of field names (for example
top_level_field_name.nested_field_name).
To extract a value from an array, specify the dot-separated path of field names and the array position starting at 0 for the first value
in an array, 1 for the second value, and so on (for example, field_name.0).
If the name identifier contains dot or period characters within the name itself, escape the name by enclosing it in brackets (for
example, [field.name.with.dot].[another.dot.field.name]).
If the field name is null (empty), use brackets with nothing in between as the identifier, for example [].
Examples
If you had an address field that contained a JSON object formatted like this:
{"city":"San Mateo","state":"CA","zip_code":"94403"}
You could extract the state value using this expression, which returns "CA":
JSON_STRING([address],"state")
If you had a misc field that contained a JSON object formatted like this (with the values contained in an array):
{"hobbies":["sailing","hiking","cooking"], "interests":["art","music","travel"]}
You could extract the first value of the hobbies array using this expression, which returns "sailing":
JSON_STRING([misc],"hobbies.0")
1.17.11.15 | LENGTH
Description
LENGTH is a row function that returns the count of characters in a Text value.
Syntax
LENGTH(string_expression)
Return Value
Returns one value per row of type Integer.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return count of characters from values in the name field. For example, the value Bob would return a length of 3, Julie would return a
length of 5, and so on:
LENGTH([name])
1.17.11.16 | PACK_VALUES
Description
PACK_VALUES is a row function that returns multiple output values packed into a single string of key/value pairs separated by the
default key and pair separators. The string returned is in a format that can be read by the EXTRACT_VALUE function. PACK_VALUES
uses the same key and pair separator values that EXTRACT_VALUE uses (the Unicode escape sequences u0003 and u0002,
respectively).
Syntax
PACK_VALUES(key,value[,key,value][,...])
Return Value
Returns one value per row of type Text. If the value for either key or value of a pair is null or contains either of the separator
values, the full key/value pair is omitted from the return value.
The key separator is u0003, which is the Unicode escape sequence for the end-of-text character. The pair separator is u0002,
which is the Unicode escape sequence for the start-of-text character.
Input Parameters
key
At least one required. A field name of any type, a literal string or number, or an expression that returns any value.
value
At least one required. A field name of any type, a literal string or number, or an expression that returns any value. The expression
must include one value instance for each key instance.
Examples
Combine the values of the [custid] and [age] fields into a single text field.
This expression returns ID\u00035555\u0002Age\u000329 when the value of the [custid] field is 5555 and the value of the [age]
field is 29:
PACK_VALUES("ID",[custid],"Age",[age])
This expression returns Age\u000329 when the value of the [age] field is 29:
PACK_VALUES("ID",NULL,"Age",[age])
This expression returns 29 as a Text value when the [age] field is an Integer and its value is 29:
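A Python sketch of the packing rules described above (key separator u0003, pair separator u0002, and pairs dropped when a key or value is null or contains a separator); it reproduces the documented outputs:

```python
KEY_SEP = "\u0003"   # separates a key from its value
PAIR_SEP = "\u0002"  # separates one key/value pair from the next

def pack_values(*args):
    pairs = []
    for key, value in zip(args[0::2], args[1::2]):
        # Omit the pair when either side is NULL (None) or holds a separator.
        if key is None or value is None:
            continue
        k, v = str(key), str(value)
        if any(sep in s for sep in (KEY_SEP, PAIR_SEP) for s in (k, v)):
            continue
        pairs.append(k + KEY_SEP + v)
    return PAIR_SEP.join(pairs)
```

For example, pack_values("ID", 5555, "Age", 29) returns "ID\u00035555\u0002Age\u000329".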
Related Information
Reference
EXTRACT_VALUE
1.17.11.17 | REGEX
Description
REGEX is a row function that performs a whole string match against a Text value with a regular expression and returns the portion
of the string matching the first capturing group of the regular expression.
Syntax
REGEX(string_expression,"regex_matching_pattern")
Return Value
Returns the matched Text value of the first capturing group of the regular expression. If there is no match, returns NULL.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
regex_matching_pattern
Required. A regular expression pattern based on the regular expression pattern matching syntax of the Java programming
language. To return a non-NULL value, the regular expression pattern must match the entire Text value.
For example, this regular expression contains three capturing groups, numbered by the position of their opening parentheses:
(a(b*))+(c)
group 1: (a(b*))
group 2: (b*)
group 3: (c)
Capturing Groups
By default, a group captures the text that produces a match, and only the most recent match is captured. The REGEX function
returns the string that matches the first capturing group in the regular expression. For example, if the input string to the expression
above was abc, the entire REGEX function would match to abc, but only return the result of group 1, which is ab.
Non-Capturing Groups
In some cases, you may want to use parentheses to group subpatterns, but not capture text. A non-capturing group starts with (?:
(a question mark and colon following the opening parenthesis). For example, h(?:a|i|o)t matches hat or hit or hot, but does
not capture the a, i, or o from the subexpression.
Examples
Match all possible email addresses with a pattern of username@provider.domain, but only return the provider portion of the email
address from the email field:
REGEX([email],"^[a-zA-Z0-9._%+-]+@([a-zA-Z0-9._-]+)\.[a-zA-Z]{2,4}$")
Match the request line of a web log, where the value is in the format of GET /pagename.html HTTP/1.1, and return the page name:
REGEX(weblog.request_line,"GET\s/([a-zA-Z0-9._%-]+\.html)\sHTTP/[0-9.]+")
Extract the inches portion from a height field where example values are 6'2", 5'11" (notice the escaping of the literal quote with
a double double-quote):
REGEX([height], "\d\'(\d+)""")
Extract all of the contents of the device field when the value is either iPod, iPad, or iPhone:
REGEX([device],"(iP[ao]d|iPhone)")
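Python's re.fullmatch gives the same whole-string semantics, which makes it useful for testing patterns before using them in Prism. This sketch returns the first capturing group; falling back to the whole match when the pattern has no groups is an assumption:

```python
import re

def prism_regex(value, pattern):
    # Whole-string match; return the first capturing group, else NULL (None).
    m = re.fullmatch(pattern, value)
    if m is None:
        return None
    return m.group(1) if m.groups() else m.group(0)
```

For example, prism_regex("jane@workday.com", r"[a-zA-Z0-9._%+-]+@([a-zA-Z0-9._-]+)\.[a-zA-Z]{2,4}") returns "workday", and prism_regex("Android", r"(iP[ao]d|iPhone)") returns None.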
Related Information
Concepts
Concept: Regular Expressions in Prism
Reference
Regex Literal and Special Characters
Regex Character Classes
Regex Line and Word Boundaries
Regex Quantifiers
Regex Capturing Groups
1.17.11.18 | REGEX_REPLACE
Description
REGEX_REPLACE is a row function that evaluates a Text value against a regular expression to determine if there is a match, and
replaces matched strings with the specified replacement value.
Syntax
REGEX_REPLACE(string_expression,"regex_match_pattern","regex_replace_pattern")
Return Value
Returns the regex_replace_pattern as a Text value when regex_match_pattern produces a match. If there is no match,
returns the value of string_expression as a Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
regex_match_pattern
Required. A string literal or regular expression pattern based on the regular expression pattern matching syntax of the Java
programming language. You can use capturing groups to create backreferences that can be used in the regex_replace_pattern
. You might want to use a string literal to make a case-sensitive match. For example, when you enter jane as the match value, the
function matches jane but not Jane. The function matches all occurrences of a string literal in the string expression.
regex_replace_pattern
Required. A string literal or regular expression pattern based on the regular expression pattern matching syntax of the Java
programming language. You can refer to backreferences from the regex_match_pattern using the syntax $n (where n is the
group number).
Examples
Match the values in a phone_number field where phone number values are formatted as xxx.xxx.xxxx and replace them with
phone number values formatted as (xxx) xxx-xxxx:
REGEX_REPLACE([phone_number],"([0-9]{3})\.([0-9]{3})\.([0-9]{4})","\($1\) $2-$3")
Match the values in a name field where name values are formatted as firstname lastname and replace them with name values
formatted as lastname, firstname:
Match the string literal mrs in a title field and replace it with the string literal Mrs.
REGEX_REPLACE([title],"mrs","Mrs")
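Python's re.sub is a close analogue for trying out match and replace patterns; note that Python writes backreferences as \1 where Prism uses $1. This reproduces the phone-number example:

```python
import re

# Reformat xxx.xxx.xxxx phone numbers as (xxx) xxx-xxxx using backreferences.
formatted = re.sub(r"([0-9]{3})\.([0-9]{3})\.([0-9]{4})",
                   r"(\1) \2-\3",
                   "650.123.4567")
print(formatted)  # (650) 123-4567

# Swap "firstname lastname" to "lastname, firstname" the same way.
swapped = re.sub(r"(\w+) (\w+)", r"\2, \1", "Jane Doe")
print(swapped)    # Doe, Jane
```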
Related Information
Concepts
Concept: Regular Expressions in Prism
Reference
Regex Literal and Special Characters
Regex Character Classes
Regex Line and Word Boundaries
Regex Quantifiers
Regex Capturing Groups
1.17.11.19 | REVERSE
Description
REVERSE is a row function that returns the characters of a string value in the opposite order.
Syntax
REVERSE(string_expression)
Return Value
Returns one value per row of type Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return the string 123 Main Street in reverse order:
REVERSE("123 Main Street")
1.17.11.20 | SUBSTRING
Description
SUBSTRING is a row function that returns the specified characters of a Text value based on the given start and optional end
position.
Syntax
SUBSTRING(search_string,start,end)
Return Value
Returns one value per row of type Text.
Input Parameters
search_string
Required. The name of a field or expression of type Text (or a literal string).
start
Required. An integer that specifies where the returned characters start (inclusive), with 0 being the first character of the string. If
start is greater than the number of characters, then an empty string is returned. If start is greater than end, then an empty string is
returned.
end
Optional. A positive integer that specifies where the returned characters end (exclusive), with the end character not being part of
the return value. If end is greater than the number of characters, or is not specified, then the whole string value (from start) is
returned.
Examples
Return the first letter of the name field:
SUBSTRING([name],0,1)
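SUBSTRING's inclusive-start/exclusive-end rules line up closely with Python slicing, which also yields an empty string for out-of-range bounds. A sketch for intuition:

```python
def substring(search_string, start, end=None):
    # Start is inclusive, end exclusive; an inverted or out-of-range span
    # returns an empty string, and a missing end returns the rest.
    if end is not None and start > end:
        return ""
    return search_string[start:end]
```

For example, substring("Julie", 0, 1) returns "J", and substring("Workday", 4) returns "day".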
1.17.11.21 | TO_LOWER
Description
TO_LOWER is a row function that converts all alphabetic characters in a Text value to lower case.
Syntax
TO_LOWER(string_expression)
Return Value
Returns one value per row of type Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return the literal input string 123 Main Street in all lower case letters:
TO_LOWER("123 Main Street")
1.17.11.22 | TO_PROPER
Description
TO_PROPER is a row function that returns a Text value with the first letter of each word capitalized.
Syntax
TO_PROPER(string_expression)
Return Value
Returns one value per row of type Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return the literal input string with the first letter of each word capitalized:
TO_PROPER("123 Alameda de las Pulgas, San Mateo CA")
1.17.11.23 | TO_UPPER
Description
TO_UPPER is a row function that converts all alphabetic characters in a Text value to upper case.
Syntax
TO_UPPER(string_expression)
Return Value
Returns one value per row of type Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return the literal input string 123 Main Street in all upper case letters:
TO_UPPER("123 Main Street")
1.17.11.24 | TRIM
Description
TRIM is a row function that removes leading and trailing spaces from a Text value.
Syntax
TRIM(string_expression)
Return Value
Returns one value per row of type Text.
Input Parameters
string_expression
Required. The name of a field or expression of type Text (or a literal string).
Examples
Return the value of the area_code field without any leading or trailing spaces. Example:
TRIM([area_code])
TRIM(" 650 123-4567 ") returns 650 123-4567. Note that the extra spaces in the middle of the string aren't removed, only the
spaces at the beginning and end of the string.
1.17.11.25 | XPATH_STRING
Description
XPATH_STRING is a row function that takes XML and returns the first string matching the given XPath expression.
Syntax
XPATH_STRING(xml_expression,"xpath_expression")
Return Value
Returns one value per row of type Text.
If the XPath expression matches more than one string in the given XML node, this function will return the first match only. To return
all matches, use XPATH_STRINGS instead.
Input Parameters
xml_expression
Required. The name of a field of type Text or a literal string that contains a valid XML node (a snippet of XML consisting of a
parent element and one or more child nodes).
xpath_expression
Required. An XPath expression that refers to a node, element, or attribute within the XML string passed to this expression. Any
XPath expression that complies to the XML Path Language (XPath) Version 1.0 specification is valid.
Examples
These example XPATH_STRING expressions assume you have a field in your dataset named address that contains XML-formatted
strings such as this:
<list>
<address type="work">
<street1>1300 So. El Camino Real</street1>
<street2>Suite 600</street2>
<city>San Mateo</city>
<state>CA</state>
<zipcode>94403</zipcode>
</address>
<address type="home">
<street1>123 Oakdale Street</street1>
<street2/>
<city>San Francisco</city>
<state>CA</state>
<zipcode>94123</zipcode>
</address>
</list>
Get the zipcode value from any address element where the type attribute equals home:
XPATH_STRING([address],"//address[@type='home']/zipcode")
returns: 94123
Get the city value from the second address element, which returns San Francisco:
XPATH_STRING([address],"/list/address[2]/city")
Get the values from all child elements of the first address element (as one string):
XPATH_STRING([address],"/list/address")
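For experimenting with these XPath expressions outside Prism, Python's xml.etree.ElementTree supports a subset of XPath 1.0 that covers the examples above (full XPath 1.0 would need a library such as lxml):

```python
import xml.etree.ElementTree as ET

# A trimmed version of the example address XML.
xml = """<list>
  <address type="work"><city>San Mateo</city><zipcode>94403</zipcode></address>
  <address type="home"><city>San Francisco</city><zipcode>94123</zipcode></address>
</list>"""
root = ET.fromstring(xml)

# Predicate on an attribute value, then a positional (1-based) predicate.
print(root.findtext(".//address[@type='home']/zipcode"))  # 94123
print(root.findtext("address[2]/city"))                   # San Francisco
```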
1.17.12.1 | URL_AUTHORITY
Description
URL_AUTHORITY is a row function that returns the authority portion of a URL string. The authority portion of a URL is the part that
has the information on how to locate and connect to the server.
Syntax
URL_AUTHORITY(URL_string)
Return Value
Returns the authority portion of a URL as a Text value, or NULL if the input string is not a valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The authority portion of the URL contains the host information, which can be specified as a domain name (www.workday.com), a
host name (localhost), or an IP address (127.0.0.1). The host information can be preceded by optional user information
terminated with @ (for example, username:password@workday.com), and followed by an optional port number preceded by a
colon (for example, localhost:8001).
Examples
Return the authority portion of URL string values in the referrer field:
URL_AUTHORITY([referrer])
URL_AUTHORITY("http://user:password@mycompany.com:8012/mypage.html") returns
user:password@mycompany.com:8012.
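Python's urllib.parse.urlsplit decomposes a URL along the same lines as the URL_* functions in this section, which makes it handy for checking expected results:

```python
from urllib.parse import urlsplit

u = urlsplit("http://user:password@mycompany.com:8012/mypage.html?q=1#top")
print(u.scheme)    # http  (URL_PROTOCOL, without the trailing colon)
print(u.netloc)    # user:password@mycompany.com:8012  (URL_AUTHORITY)
print(u.hostname)  # mycompany.com  (URL_HOST)
print(u.port)      # 8012  (URL_PORT)
print(u.path)      # /mypage.html  (URL_PATH)
print(u.query)     # q=1  (URL_QUERY)
print(u.fragment)  # top  (URL_FRAGMENT)
```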
1.17.12.2 | URL_FRAGMENT
Description
URL_FRAGMENT is a row function that returns the fragment portion of a URL string.
Syntax
URL_FRAGMENT(URL_string)
Return Value
Returns the fragment portion of a URL as a Text value, NULL if the URL does not contain a fragment, or NULL if the input string is
not a valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The optional fragment portion of the URL is separated by a hash mark (#) and provides direction to a secondary resource, such as a
heading or anchor identifier.
Examples
Return the fragment portion of URL string values in the request field:
URL_FRAGMENT([request])
1.17.12.3 | URL_HOST
Description
URL_HOST is a row function that returns the host, domain, or IP address portion of a URL string.
Syntax
URL_HOST(URL_string)
Return Value
Returns the host portion of a URL as a Text value, or NULL if the input string is not a valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The authority portion of the URL contains the host information, which can be specified as a domain name (www.workday.com), a
host name (localhost), or an IP address (127.0.0.1).
Examples
Return the host portion of URL string values in the referrer field:
URL_HOST([referrer])
1.17.12.4 | URL_PATH
Description
URL_PATH is a row function that returns the path portion of a URL string.
Syntax
URL_PATH(URL_string)
Return Value
Returns the path portion of a URL as a Text value, NULL if the URL does not contain a path, or NULL if the input string is not a
valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The optional path portion of the URL is a sequence of resource location segments separated by a forward slash (/), conceptually
similar to a directory path.
Examples
Return the path portion of URL string values in the request field:
URL_PATH([request])
1.17.12.5 | URL_PORT
Description
URL_PORT is a row function that returns the port portion of a URL string.
Syntax
URL_PORT(URL_string)
Return Value
Returns the port portion of a URL as an Integer value. If the URL does not specify a port, then returns -1. If the input string is not a
valid URL, returns NULL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The authority portion of the URL contains the host information, which can be specified as a domain name (www.workday.com), a
host name (localhost), or an IP address (127.0.0.1). The host information can be followed by an optional port number preceded
by a colon (for example, localhost:8001).
Examples
Return the port portion of URL string values in the referrer field:
URL_PORT([referrer])
1.17.12.6 | URL_PROTOCOL
Description
URL_PROTOCOL is a row function that returns the protocol (or URI scheme name) portion of a URL string.
Syntax
URL_PROTOCOL(URL_string)
Return Value
Returns the protocol portion of a URL as a Text value, or NULL if the input string is not a valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment]
The protocol portion of a URL consists of a sequence of characters beginning with a letter and followed by any combination of
letter, number, plus (+), period (.), or hyphen (-) characters, followed by a colon (:). For example: http:, ftp:, mailto:
Examples
Return the protocol portion of URL string values in the referrer field:
URL_PROTOCOL([referrer])
1.17.12.7 | URL_QUERY
Description
URL_QUERY is a row function that returns the query portion of a URL string.
Syntax
URL_QUERY(URL_string)
Return Value
Returns the query portion of a URL as a Text value, NULL if the URL does not contain a query, or NULL if the input string is not a
valid URL.
Input Parameters
URL_string
Required. A field or expression that returns a Text value in URI (uniform resource identifier) format of:
protocol:authority[/path][?query][#fragment].
The optional query portion of the URL is separated by a question mark (?) and typically contains an unordered list of key=value
pairs separated by an ampersand (&) or semicolon (;).
Examples
Return the query portion of URL string values in the request field:
URL_QUERY([request])
1.17.12.8 | URLDECODE
Description
URLDECODE is a row function that decodes a Text value that has been encoded with the application/x-www-form-urlencoded
media type. URL encoding, also known as percent-encoding, is a mechanism for encoding information in a Uniform Resource
Identifier (URI). When sent in an HTTP GET request, application/x-www-form-urlencoded data is included in the query
component of the request URI. When sent in an HTTP POST request, the data is placed in the body of the message, and the name
of the media type is included in the message Content-Type header.
Syntax
URLDECODE(URL_string)
Return Value
Returns a value of type Text with percent-encoded sequences decoded to their corresponding characters (for example, %20
decodes to a space) and plus (+) characters decoded as spaces.
Input Parameters
URL_string
Required. A field or expression that returns a Text value. It is assumed that all characters in the input string are one of the
following: lower-case letters (a-z), upper-case letters (A-Z), numeric digits (0-9), or the hyphen (-), comma (,), underscore (_),
period (.) or asterisk (*) character. The percent character (%) is allowed, but is interpreted as the start of a special escaped
sequence. The plus character (+) is allowed, but is interpreted as a space character.
Examples
Decode the values of the url_query field:
URLDECODE([url_query])
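Python's urllib.parse.unquote_plus applies the same application/x-www-form-urlencoded decoding rules (percent-escapes decoded, + treated as a space), which is useful for predicting URLDECODE results:

```python
from urllib.parse import unquote_plus

# '+' becomes a space; %3D and %20 decode to '=' and a space.
print(unquote_plus("first+name%3DJane%20Doe"))  # first name=Jane Doe
```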
1.17.13.1 | AVG
Description
AVG is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the average of all valid
numeric values in the group. It sums all values in the group and divides by the number of valid (NOT NULL) rows. You can use AVG
to calculate moving averages.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (average for this function) in each group.
Syntax
AVG(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
RANGE BETWEEN value PRECEDING AND CURRENT ROW |
ROWS win_boundary | BETWEEN win_boundary AND win_boundary
)
where win_boundary is one of:
UNBOUNDED PRECEDING
value PRECEDING
UNBOUNDED FOLLOWING
value FOLLOWING
CURRENT ROW
Return Value
Returns a value of type Numeric or Double depending on the type of input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within an AVG expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency. However, you must use a numeric field type, such as Integer
or Numeric, when you use the RANGE clause.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS | RANGE
Required. The ROWS and RANGE clauses define the specific number of rows (relative to the current row) within the partition by
specifying a window frame. You define the window frame by specifying start and end points within the partition, known as window
boundaries. The window frame is the set of input rows in each partition over which to calculate the aggregate expression (average
for this function). The window frame can include one, several, or all rows of the partition.
Both ROWS and RANGE specify the range of rows relative to the current row, but RANGE operates logically on values (logical
association) and ROWS operates physically on rows in the dataset (physical association).
RANGE limits the window frame to contain rows that have their values within the specified range, relative to the current value. ROWS
limits the window frame to contain rows that are physically next to the current row.
Use RANGE to define absolute window boundaries, such as the past 3 months or year to date. When you use RANGE, the ORDER BY
clause must use a numeric field type, such as Integer or Numeric.
Example: Suppose you have an Integer field called MonthNum that represents the number of the month in the year (values 1 to 12).
To specify all values from the past 3 months, you would order by MonthNum and use RANGE BETWEEN 2 PRECEDING AND
CURRENT ROW. This RANGE clause includes the current month and the previous 2 months, resulting in 3 months total.
Note: When you publish a dataset that contains a window function using RANGE, the number of rows in the window must be 1000 or
less. If a particular window exceeds 1000 rows, the publish job fails.
win_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). The FOLLOWING clause defines a window boundary that is greater than the current row (the number of rows to include
after the current row).
If you specify only 1 window boundary, then Workday uses the current row as the other boundary in the window frame (either the
upper or lower boundary depending on the expression syntax). The UNBOUNDED keyword includes all rows in the direction specified.
When you need to specify both a start and end of a window frame, use the BETWEEN and AND keywords.
When specifying a specific number of rows, the value must be 100 or less.
Example: ROWS 2 PRECEDING means that the window is 3 rows in size, starting with 2 rows preceding until and including the
current row.
Example: ROWS UNBOUNDED FOLLOWING means that the window starts with the current row and includes the current row and all
rows that come after the current row.
Examples
You can calculate the moving average (rolling average or running average) of sales for each employee:
You can calculate the overall average sales for every row in the partition, regardless of the fields in the ORDER BY clause:
The Month field must be a numeric field type, such as Integer or Numeric.
The Year field must be a numeric field type, such as Integer or Numeric.
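As a sketch, a moving-average expression along these lines could be used, assuming hypothetical [Sales], [Employee], [Year], and [Month] fields (with Year and Month as Integer):

```
AVG([Sales]) OVER(
    PARTITION BY [Employee]
    ORDER BY [Year] ASC, [Month] ASC
    ROWS 2 PRECEDING
)
```

Each row then carries the average of the current and 2 preceding rows' sales. For the overall average on every row, a frame of ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING could be used instead.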
1.17.13.2 | COUNT
Description
COUNT is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the total number of valid
rows (NOT NULL) in the group. You can use COUNT together with other functions to calculate cumulative aggregates.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (count for this function) in each group.
Syntax
COUNT(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
RANGE BETWEEN value PRECEDING AND CURRENT ROW |
ROWS win_boundary | BETWEEN win_boundary AND win_boundary
)
where win_boundary is one of:
UNBOUNDED PRECEDING
value PRECEDING
UNBOUNDED FOLLOWING
value FOLLOWING
CURRENT ROW
Return Value
Returns a value of type Long.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a COUNT expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency. However, you must use a numeric field type, such as Integer
or Numeric, when you use the RANGE clause.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS | RANGE
Required. The ROWS and RANGE clauses define the specific number of rows (relative to the current row) within the partition by
specifying a window frame. You define the window frame by specifying start and end points within the partition, known as window
boundaries. The window frame is the set of input rows in each partition over which to calculate the aggregate expression (count for
this function). The window frame can include one, several, or all rows of the partition.
Both ROWS and RANGE specify the range of rows relative to the current row, but RANGE operates logically on values (logical
association) and ROWS operates physically on rows in the dataset (physical association).
RANGE limits the window frame to contain rows that have their values within the specified range, relative to the current value. ROWS
limits the window frame to contain rows that are physically next to the current row.
Use RANGE to define absolute window boundaries, such as the past 3 months or year to date. When you use RANGE, the ORDER BY
clause must use a numeric field type, such as Integer or Numeric.
Example: Suppose you have an Integer field called MonthNum that represents the number of the month in the year (values 1 to 12).
To specify all values from the past 3 months, you would order by MonthNum and use RANGE BETWEEN 2 PRECEDING AND
CURRENT ROW. This RANGE clause includes the current month and the previous 2 months, resulting in 3 months total.
Note: When you publish a dataset that contains a window function using RANGE, the number of rows in the window must be 1000 or
less. If a particular window exceeds 1000 rows, the publish job fails.
win_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). The FOLLOWING clause defines a window boundary that is greater than the current row (the number of rows to include
after the current row).
If you specify only 1 window boundary, then Workday uses the current row as the other boundary in the window frame (either the
upper or lower boundary depending on the expression syntax). The UNBOUNDED keyword includes all rows in the direction specified.
When you need to specify both a start and end of a window frame, use the BETWEEN and AND keywords.
When specifying a specific number of rows, the value must be 100 or less.
Example: ROWS 2 PRECEDING means that the window is 3 rows in size, starting with 2 rows preceding until and including the
current row.
Example: ROWS UNBOUNDED FOLLOWING means that the window starts with the current row and includes the current row and all
rows that come after the current row.
Examples
You can calculate the moving count (running count or rolling count) of sales for each employee:
You can calculate the overall count of sales for every row in the partition, regardless of the fields in the ORDER BY clause:
The Month field must be a numeric field type, such as Integer or Numeric.
The Year field must be a numeric field type, such as Integer or Numeric.
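As a sketch, a running-count expression could look like this, assuming hypothetical [Sales], [Employee], [Year], and [Month] fields (with Year and Month as Integer):

```
COUNT([Sales]) OVER(
    PARTITION BY [Employee]
    ORDER BY [Year] ASC, [Month] ASC
    ROWS UNBOUNDED PRECEDING
)
```

Each row then carries the count of valid (NOT NULL) sales values from the start of the partition through the current row. A frame of ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING would instead return the overall partition count on every row.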
1.17.13.3 | FIRST
Description
FIRST is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the value from the first
row in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (first for this function) in each group.
Syntax
FIRST(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
ROWS start_window_boundary
)
where start_window_boundary is: UNBOUNDED PRECEDING
Return Value
Returns a value of the same type as input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a FIRST expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS
Required. The ROWS clause defines the specific number of rows (relative to the current row) within the partition by specifying a
window frame. You define the window frame by specifying start and end points within the partition, known as window boundaries.
The window frame is the set of input rows in each partition, relative to the current row, over which to calculate the aggregate
expression (first for this function). The window frame can include one, several, or all rows of the partition.
window_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). If you specify only 1 window boundary, then Workday uses the current row as the last row in the window frame (the
upper boundary). The UNBOUNDED keyword includes all rows in the direction specified.
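As a sketch, an expression that returns each employee's earliest salary on every row could look like this, assuming hypothetical [Salary], [Employee], and [Eff_Date] fields:

```
FIRST([Salary]) OVER(
    PARTITION BY [Employee]
    ORDER BY [Eff_Date] ASC
    ROWS UNBOUNDED PRECEDING
)
```

With ascending order by effective date, the first row in each partition holds the earliest salary, so every row in the partition receives that value.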
1.17.13.4 | LAG
Description
LAG is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the value of a field in the
row at the specified offset before (above) the current row in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (lag for this function) in each group.
Syntax
LAG(input_field, offset, default_value) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
)
Return Value
Returns one value per row of the same type as the input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can specify any field type.
offset
Optional. The number of rows before the current row whose value to return. Must be a literal number greater than or equal to zero
(0) and less than or equal to 100. If you don't specify the offset, Workday uses the value of 1.
default_value
Optional. The value this function returns when the offset row is outside the currently defined window or when the value in the offset
row is NULL. default_value must be the same type as input_field. If you don't specify a default value, Workday uses the value of
NULL.
OVER()
Required. OVER must be used within a LAG expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
Examples
Example: You have a dataset with these rows and fields.
You can order the rows for each employee in ascending (ASC) order by the effective date (Eff_Date) field, so the earliest salary comes first in each partition.
Use this expression in the Salary_Increase field to calculate the change in salary between each change in effective date:
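As a sketch, one way to compute the increase, using the Employee, Salary, and Eff_Date field names from the example:

```
[Salary] - LAG([Salary], 1, 0) OVER(
    PARTITION BY [Employee]
    ORDER BY [Eff_Date] ASC
)
```

LAG retrieves the salary from the previous effective date within the same employee's partition; subtracting it from the current salary yields the increase. The default value of 0 means the first row reports the full starting salary; omitting the third argument would report NULL for that row instead.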
Related Information
Reference
Workday 32 What’s New Post: Prism Analytics Data Preparation
1.17.13.5 | LAST
Description
LAST is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the value from the last row
in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (last for this function) in each group.
Syntax
LAST(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
ROWS BETWEEN start_window_boundary AND end_window_boundary
)
where start_window_boundary is CURRENT ROW and end_window_boundary is UNBOUNDED FOLLOWING
Return Value
Returns a value of the same type as input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a LAST expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS
Required. The ROWS clause defines the specific number of rows (relative to the current row) within the partition by specifying a
window frame. You define the window frame by specifying start and end points within the partition, known as window boundaries.
The window frame is the set of input rows in each partition, relative to the current row, over which to calculate the aggregate
expression (last for this function). The window frame can include one, several, or all rows of the partition.
window_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A FOLLOWING clause defines a window boundary that is after the current row (the number of rows to include after the current row).
Workday uses the current row as the first row in the window frame (the lower boundary). The UNBOUNDED keyword includes all rows
in the direction specified.
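As a sketch, an expression that returns each employee's most recent salary on every row could look like this, assuming hypothetical [Salary], [Employee], and [Eff_Date] fields:

```
LAST([Salary]) OVER(
    PARTITION BY [Employee]
    ORDER BY [Eff_Date] ASC
    ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
)
```

With this frame, the last row of every window is the partition's final row, so every row receives the value from the latest effective date.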
1.17.13.6 | LEAD
Description
LEAD is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the value of a field in the
row at the specified offset after (below) the current row in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (lead for this function) in each group.
Syntax
LEAD(input_field, offset, default_value) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
)
Return Value
Returns one value per row of the same type as the input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can specify any field type.
offset
Optional. The number of rows after the current row whose value to return. Must be a literal number greater than or equal to zero (0)
and less than or equal to 100. If you don't specify the offset, Workday uses the value of 1.
default_value
Optional. The value this function returns when the offset row is outside the currently defined window or when the value in the offset
row is NULL. default_value must be the same type as input_field. If you don't specify a default value, Workday uses the value of
NULL.
OVER()
Required. OVER must be used within a LEAD expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
Examples
Example: You have a dataset with these rows and fields.
You can order the rows for each employee in descending (DESC) order by the effective date (Eff_Date) field, so the most recent
salary comes first in each partition.
Use this expression in the Salary_Increase field to calculate the change in salary between each change in effective date:
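As a sketch, one way to compute the increase, using the Employee, Salary, and Eff_Date field names from the example:

```
[Salary] - LEAD([Salary], 1, 0) OVER(
    PARTITION BY [Employee]
    ORDER BY [Eff_Date] DESC
)
```

Because the rows are in descending order, LEAD retrieves the next-older salary, so the difference is the increase introduced at the current row's effective date.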
Related Information
Reference
Workday 32 What’s New Post: Prism Analytics Data Preparation
1.17.13.7 | MAX
Description
MAX is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the maximum (highest)
value in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (maximum for this function) in each group.
Syntax
MAX(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
RANGE BETWEEN value PRECEDING AND CURRENT ROW |
ROWS win_boundary | BETWEEN win_boundary AND win_boundary
)
where win_boundary is one of:
UNBOUNDED PRECEDING
value PRECEDING
UNBOUNDED FOLLOWING
value FOLLOWING
CURRENT ROW
Return Value
Returns a value of type Numeric or Double depending on the type of input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a MAX expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency. However, you must use a numeric field type, such as Integer
or Numeric, when you use the RANGE clause.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS | RANGE
Required. The ROWS and RANGE clauses define the specific number of rows (relative to the current row) within the partition by
specifying a window frame. You define the window frame by specifying start and end points within the partition, known as window
boundaries. The window frame is the set of input rows in each partition over which to calculate the aggregate expression
(maximum for this function). The window frame can include one, several, or all rows of the partition.
Both ROWS and RANGE specify the range of rows relative to the current row, but RANGE operates logically on values (logical
association) and ROWS operates physically on rows in the dataset (physical association).
RANGE limits the window frame to contain rows that have their values within the specified range, relative to the current value. ROWS
limits the window frame to contain rows that are physically next to the current row.
Use RANGE to define absolute window boundaries, such as the past 3 months or year to date. When you use RANGE, the ORDER BY
clause must use a numeric field type, such as Integer or Numeric.
Example: Suppose you have an Integer field called MonthNum that represents the number of the month in the year (values 1 to 12).
To specify all values from the past 3 months, you would order by MonthNum and use RANGE BETWEEN 2 PRECEDING AND
CURRENT ROW. This RANGE clause includes the current month and the previous 2 months, resulting in 3 months total.
Note: When you publish a dataset that contains a window function using RANGE, the number of rows in the window must be 1000 or
less. If a particular window exceeds 1000 rows, the publish job fails.
win_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). The FOLLOWING clause defines a window boundary that is greater than the current row (the number of rows to include
after the current row).
If you specify only 1 window boundary, then Workday uses the current row as the other boundary in the window frame (either the
upper or lower boundary depending on the expression syntax). The UNBOUNDED keyword includes all rows in the direction specified.
When you need to specify both a start and end of a window frame, use the BETWEEN and AND keywords.
When specifying a specific number of rows, the value must be 100 or less.
Example: ROWS 2 PRECEDING means that the window is 3 rows in size, starting with 2 rows preceding until and including the
current row.
Example: ROWS UNBOUNDED FOLLOWING means that the window starts with the current row and includes the current row and all
rows that come after the current row.
Examples
Example: You have a dataset with these rows and fields.
You can calculate the highest change in compensation (Comp Change field) for each supervisory org in each quarter.
To ensure that Workday returns the same value for every row in a partition, order the rows in descending (DESC) order by the same
field as the input field so the highest compensation change comes first in each partition.
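As a sketch, such an expression could take this shape, assuming hypothetical [Comp Change], [Supervisory Org], and [Quarter] field names:

```
MAX([Comp Change]) OVER(
    PARTITION BY [Supervisory Org], [Quarter]
    ORDER BY [Comp Change] DESC
    ROWS UNBOUNDED PRECEDING
)
```

Because the rows are ordered so the highest change comes first, every frame in the partition includes that first row, and every row returns the same maximum.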
Related Information
Reference
Workday 32 What’s New Post: Prism Analytics Dataset Window Functions
1.17.13.8 | MIN
Description
MIN is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the minimum (lowest) value
in the group.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (minimum for this function) in each group.
Syntax
MIN(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
RANGE BETWEEN value PRECEDING AND CURRENT ROW |
ROWS win_boundary | BETWEEN win_boundary AND win_boundary
)
where win_boundary is one of:
UNBOUNDED PRECEDING
value PRECEDING
UNBOUNDED FOLLOWING
value FOLLOWING
CURRENT ROW
Return Value
Returns a value of type Numeric or Double depending on the type of input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a MIN expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency. However, you must use a numeric field type, such as Integer
or Numeric, when you use the RANGE clause.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS | RANGE
Required. The ROWS and RANGE clauses define the specific number of rows (relative to the current row) within the partition by
specifying a window frame. You define the window frame by specifying start and end points within the partition, known as window
boundaries. The window frame is the set of input rows in each partition over which to calculate the aggregate expression
(minimum for this function). The window frame can include one, several, or all rows of the partition.
Both ROWS and RANGE specify the range of rows relative to the current row, but RANGE operates logically on values (logical
association) and ROWS operates physically on rows in the dataset (physical association).
RANGE limits the window frame to contain rows that have their values within the specified range, relative to the current value. ROWS
limits the window frame to contain rows that are physically next to the current row.
Use RANGE to define absolute window boundaries, such as the past 3 months or year to date. When you use RANGE, the ORDER BY
clause must use a numeric field type, such as Integer or Numeric.
Example: Suppose you have an Integer field called MonthNum that represents the number of the month in the year (values 1 to 12).
To specify all values from the past 3 months, you would order by MonthNum and use RANGE BETWEEN 2 PRECEDING AND
CURRENT ROW. This RANGE clause includes the current month and the previous 2 months, resulting in 3 months total.
Note: When you publish a dataset that contains a window function using RANGE, the number of rows in the window must be 1000 or
less. If a particular window exceeds 1000 rows, the publish job fails.
win_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). The FOLLOWING clause defines a window boundary that is greater than the current row (the number of rows to include
after the current row).
If you specify only 1 window boundary, then Workday uses the current row as the other boundary in the window frame (either the
upper or lower boundary depending on the expression syntax). The UNBOUNDED keyword includes all rows in the direction specified.
When you need to specify both a start and end of a window frame, use the BETWEEN and AND keywords.
When you specify a specific number of rows, the value must be 100 or less.
Example: ROWS 2 PRECEDING means that the window is 3 rows in size, starting with 2 rows preceding until and including the
current row.
Example: ROWS UNBOUNDED FOLLOWING means that the window starts with the current row and includes the current row and all
rows that come after the current row.
Examples
Example: You have a dataset with these rows and fields.
You can calculate the lowest change in compensation (Comp Change field) for each supervisory org in each quarter.
To ensure that Workday returns the same value for every row in a partition, order the rows in ascending (ASC) order by the same
field as the input field so the lowest compensation change comes first in each partition.
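A sketch of one way to write this expression, assuming fields named [Comp Change], [Supervisory Org], and [Quarter] (hypothetical names based on the description above):

```
MIN([Comp Change]) OVER(
PARTITION BY [Supervisory Org], [Quarter]
ORDER BY [Comp Change] ASC
ROWS UNBOUNDED PRECEDING
)
```

Because the rows are sorted ascending by the input field, the first row in each partition holds the minimum, and every row's frame starts at that row, so every row in the partition returns the same lowest value.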
Related Information
Reference
Workday 32 What’s New Post: Prism Analytics Dataset Window Functions
1.17.13.9 | RANK
Description
RANK is a window aggregate function used to assign a ranking number to each row in a group. If multiple rows have the same
ranking value (there's a tie), then Workday assigns the same rank value to the tied rows and skips the subsequent rank position.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups.
The ORDER BY clause determines how to order the rows in the partition before they're ranked.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (rank for this function) in each group. The ranked rows in each group start at 1.
Syntax
RANK() OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
)
Return Value
Returns a value of type Integer.
Input Parameters
OVER()
Required. OVER must be used within a RANK expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups together into a single
partition all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
Examples
Example: You have a dataset with these rows and fields.
Goh 11/30/2018 60
Freeman 11/30/2018 60
Smith 11/30/2018 60
You can rank the sales for each employee in descending order, so the highest sales is given the ranking of 1. Use this expression in
the Rank Sales by Employee field:
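A plausible sketch of such an expression, assuming the dataset has fields named [Employee] and [Sales] (the actual field names are assumptions for illustration):

```
RANK() OVER(
PARTITION BY [Employee]
ORDER BY [Sales] DESC
)
```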
Goh 11/30/2018 60 3
Freeman 11/30/2018 60 3
Smith 11/30/2018 60 3
You can also rank the sales for each date in descending order, so the highest sales is given the ranking of 1. Use this expression in
the Rank Sales by Date field:
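A plausible sketch of such an expression, again assuming fields named [Date] and [Sales]:

```
RANK() OVER(
PARTITION BY [Date]
ORDER BY [Sales] DESC
)
```

Because all three employees have the same Sales value for 11/30/2018, each tied row receives rank 1 and the next rank position is skipped.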
Goh 11/30/2018 60 1
Freeman 11/30/2018 60 1
Smith 11/30/2018 60 1
Notice that tied values are given the same rank number and the following rank position is skipped.
1.17.13.10 | ROW_NUMBER
Description
ROW_NUMBER is a window aggregate function that partitions rows into groups, orders rows by a field, and assigns a unique,
sequential number to each row in a group, starting at 1 for the first row in each group. ROW_NUMBER always assigns a unique value
to each row in a group. You might want to use ROW_NUMBER to create a unique ID for each row in your dataset.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups.
The ORDER BY clause determines how to order the rows in the partition before they're assigned a sequential number.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (row numbering for this function) in each group. The numbered rows in each group
start at 1.
Syntax
ROW_NUMBER() OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
)
Return Value
Returns a value of type Integer.
Input Parameters
OVER()
Required. OVER must be used within a ROW_NUMBER expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups together into a single
partition all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
Examples
Example: You have a dataset with these rows and fields.
Goh 11/30/2018 60
Freeman 11/30/2018 60
Smith 11/30/2018 60
You can assign a unique ID to the sales of each employee in descending order, so the highest sales is given the ranking of 1. Use
this expression in the Sales Num by Employee field:
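A plausible sketch of such an expression, assuming fields named [Employee] and [Sales] (names are assumptions for illustration):

```
ROW_NUMBER() OVER(
PARTITION BY [Employee]
ORDER BY [Sales] DESC
)
```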
Goh 11/30/2018 60 3
Freeman 11/30/2018 60 3
Smith 11/30/2018 60 3
You can also assign a unique ID to the sales for each date in descending order, so the highest sales is given the ranking of 1. Use
this expression in the Sales Num by Date field:
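A plausible sketch of such an expression, again assuming fields named [Date] and [Sales]:

```
ROW_NUMBER() OVER(
PARTITION BY [Date]
ORDER BY [Sales] DESC
)
```

Unlike RANK, ROW_NUMBER assigns distinct numbers (1, 2, 3) even to tied Sales values; the order among tied rows is arbitrary.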
Goh 11/30/2018 60 1
Freeman 11/30/2018 60 2
Smith 11/30/2018 60 3
You can also use ROW_NUMBER to determine the latest version of every row in a dataset that contains multiple rows per ID. In this
scenario, the dataset requires a date field that represents when the information in that row became current. If you're familiar with
data warehousing concepts, this is a type 2 slowly changing dimension table. You have a dataset with these rows and fields.
To assign the value of 1 to the latest version of each ID, use this expression in the Latest Version field:
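A plausible sketch of such an expression, assuming an [ID] field and an [Effective Date] field that represents when each row became current (field names are assumptions for illustration):

```
ROW_NUMBER() OVER(
PARTITION BY [ID]
ORDER BY [Effective Date] DESC
)
```

Sorting each ID's rows by date in descending order places the most recent row first, so it receives the number 1.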
You can filter on Latest Version using a Filter Stage in order to return only the latest row for each ID.
1.17.13.11 | SUM
Description
SUM is a window aggregate function that partitions rows into groups, orders rows by a field, and returns the total of all values in the
group. You can use SUM to calculate running totals.
The PARTITION BY clause determines which fields to use to partition a set of input rows into groups. The ORDER BY clause
determines how to order the rows in the partition.
Workday separates the input rows into groups according to the partitioning fields, orders the rows according to the ordering fields,
and then computes the aggregate expression (sum for this function) in each group.
Syntax
SUM(input_field) OVER(
PARTITION BY partitioning_field [, partitioning_field]
ORDER BY ordering_field [ASC | DESC] [, ordering_field [ASC | DESC]]
RANGE BETWEEN value PRECEDING AND CURRENT ROW |
ROWS win_boundary | BETWEEN win_boundary AND win_boundary
)
win_boundary is one of:
UNBOUNDED PRECEDING
value PRECEDING
UNBOUNDED FOLLOWING
value FOLLOWING
CURRENT ROW
Return Value
Returns a value of type Numeric, Long, or Double depending on the type of input_field.
Input Parameters
input_field
Required. The field on which to perform the aggregate function. You can use any numeric field or a Currency field.
OVER()
Required. OVER must be used within a SUM expression.
PARTITION BY partitioning_field
Required. Use the PARTITION BY clause to specify 1 or more fields to use to partition a group of input rows. You can specify any
field type except Currency. Example: You specify the Month field as the partitioning field, so Workday groups into a single partition
all records that have the same value for Month.
ORDER BY ordering_field
Required. Use the ORDER BY clause to specify how to order the input rows in the partition using the values in the specified field
within each partition. You can specify any field type except Currency. However, you must use a numeric field type, such as Integer
or Numeric when you use the RANGE clause.
You can use the DESC or ASC keywords to sort in descending order (high to low values, NULLs are last) or ascending order (low to
high values, NULLs are first) for each ordering field. If you don't specify a sort order for an ordering field, Workday automatically
sorts rows in ascending order.
ROWS | RANGE
Required. The ROWS and RANGE clauses define the specific number of rows (relative to the current row) within the partition by
specifying a window frame. You define the window frame by specifying start and end points within the partition, known as window
boundaries. The window frame is the set of input rows in each partition over which to calculate the aggregate expression (sum for
this function). The window frame can include one, several, or all rows of the partition.
Both ROWS and RANGE specify the range of rows relative to the current row, but RANGE operates logically on values (logical
association) and ROWS operates physically on rows in the dataset (physical association).
RANGE limits the window frame to contain rows that have their values within the specified range, relative to the current value. ROWS
limits the window frame to contain rows that are physically next to the current row.
Use RANGE to define absolute window boundaries, such as the past 3 months or year to date. When you use RANGE, the ORDER BY
clause must use a numeric field type, such as Integer or Numeric.
Example: Suppose you have an Integer field called MonthNum that represents the number of the month in the year (values 1 to 12).
To specify all values from the past 3 months, you would order by MonthNum and use RANGE BETWEEN 2 PRECEDING AND
CURRENT ROW. This RANGE clause includes the current month and the previous 2 months, resulting in 3 months total.
Note: When you publish a dataset that contains a window function using RANGE, the number of rows in the window must be 1000 or
less. If a particular window exceeds 1000 rows, the publish job fails.
win_boundary
Required. The window boundaries define the start and end points of the window frame. Window boundaries are relative to the
current row.
A PRECEDING clause defines a window boundary that is lower than the current row (the number of rows to include before the
current row). The FOLLOWING clause defines a window boundary that is greater than the current row (the number of rows to include
after the current row).
If you specify only 1 window boundary, then Workday uses the current row as the other boundary in the window frame (either the
upper or lower boundary depending on the expression syntax). The UNBOUNDED keyword includes all rows in the direction specified.
When you need to specify both a start and end of a window frame, use the BETWEEN and AND keywords.
When you specify a specific number of rows, the value must be 100 or less.
Example: ROWS 2 PRECEDING means that the window is 3 rows in size, starting with 2 rows preceding until and including the
current row.
Example: ROWS UNBOUNDED FOLLOWING means that the window starts with the current row and includes the current row and all
rows that come after the current row.
Examples
You can calculate the running total (rolling sum or moving sum) of sales for each employee:
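A plausible sketch of a running-total expression, assuming fields named [Sales], [Employee], and [Month] (names are assumptions for illustration):

```
SUM([Sales]) OVER(
PARTITION BY [Employee]
ORDER BY [Month] ASC
ROWS UNBOUNDED PRECEDING
)
```

Each row's frame extends from the start of the partition through the current row, so the sum grows month by month.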
You can calculate the overall sum (total sales) for every row in the partition, regardless of the fields in the ORDER BY clause:
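A plausible sketch of an overall-total expression, again assuming [Sales], [Employee], and [Month] fields:

```
SUM([Sales]) OVER(
PARTITION BY [Employee]
ORDER BY [Month] ASC
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
)
```

Because the frame spans the entire partition in both directions, every row receives the same total, regardless of the sort order.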
The Month field must be a numeric field type, such as Integer or Numeric.
The Year field must be a numeric field type, such as Integer or Numeric.
Regular expressions vary in complexity using a combination of basic constructs to describe a string matching pattern. This
reference describes the most common regular expression matching patterns, but is not a comprehensive list.
Regular expressions, also referred to as regex or regexp, are a standardized collection of special characters and constructs used
for matching strings of text. They provide a flexible and precise language for matching particular characters, words, or patterns of
characters.
Prism Analytics regular expressions are based on the pattern matching syntax of the Java programming language. For more in-depth information on writing valid regular expressions, refer to the Java regular expression pattern documentation.
You can use regular expressions in Prism calculated field expressions that use either the REGEX or REGEX_REPLACE functions.
This section describes the regular expression syntax for referring to literal characters, special characters, nonprintable characters
(such as a tab or a newline), and special character escaping.
Literal Characters
The most basic form of pattern matching is the match of literal characters. If the regular expression is foo and the input string is
foo, the match will succeed because the strings are identical.
Special Characters
Certain characters are reserved for special use in regular expressions. These special characters are called metacharacters. If you
want to use special characters as literal characters, you must escape them.
There are 2 ways to escape a special character:
- Precede the special character with a \ (backslash character). Example: To specify an asterisk as a literal character instead of a quantifier, use \*.
- Enclose the special characters within \Q (starting quote) and \E (ending quote). Everything between \Q and \E is then treated as literal characters.
To escape literal double-quotes in a REGEX() expression, double the double-quotes (""). Example: To extract the inches portion from a height field where example values are 6'2" and 5'11":
REGEX([height], "\'(\d+)""$")
Nonprintable Characters
You can use special character sequence constructs to specify nonprintable characters in a regular expression. Some of the most
commonly used constructs are:
\n matches a newline character.
\t matches a tab character.
Character Classes
You can use a hyphen inside a character class to specify a range of characters. Example: [a-z] matches a single lowercase letter between a and z. You can also use more than 1 range, or a combination of ranges and single characters. Example: [0-9X] matches a numeric digit or the letter X. The order of the characters and the ranges doesn't matter.
A caret following an opening bracket specifies characters to exclude from a match. For example, [^abc] matches any character
except a, b, or c.
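As an illustration of character classes inside a Prism calculated field, the following sketch assumes a hypothetical [product_code] field containing values such as ABC-19X:

```
REGEX([product_code], "^([A-Z]{3}-[0-9X]+)$")
```

The class [A-Z] matches the 3-letter prefix, and [0-9X] combines a digit range with the single character X for the suffix.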
. Matches any single character (except newline). Example: .at matches "cat", "hat", and also "bat" in the phrase "batch files".
\d Matches any digit character (equivalent to [0-9]). Example: \d matches "3" in "C3PO" and "2" in "file_2.txt".
\D Matches any non-digit character (equivalent to [^0-9]). Example: \D matches "S" in "900S" and "Q" in "Q45".
\s Matches any single white-space character (equivalent to [ \t\n\x0B\f\r]). Example: \sbook matches "book" in "blue book" but nothing in "notebook".
\S Matches any single non-white-space character. Example: \Sbook matches "book" in "notebook" but nothing in "blue book".
\w Matches any alphanumeric character, including underscore (equivalent to [A-Za-z0-9_]). Example: r\w* matches "rm" and "root".
\W Matches any non-alphanumeric character (equivalent to [^A-Za-z0-9_]). Example: \W matches "&" in "stmd &", "%" in "100%", and "$" in "$HOME".
Boundary Matching Constructs
You can use boundary matching constructs to specify where in a string to apply a matching pattern. For example, you can search for a particular pattern within a word boundary, or search for a pattern at the beginning or end of a line.
^ Matches from the beginning of a line (multi-line matches are currently not supported). Example: ^172 matches the "172" in IP address "172.18.1.11" but not in "192.172.2.33".
$ Matches from the end of a line (multi-line matches are currently not supported). Example: d$ matches the "d" in "maid" but not in "made".
\b Matches within a word boundary. Example: \bis\b matches the word "is" in "this is my island", but not the "is" part of "this" or "island". \bis matches both "is" and the "is" in "island", but not in "this".
\B Matches within a non-word boundary. Example: \Bb matches "b" in "sbin" but not in "bash".
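As an illustration of an anchored boundary match in a Prism calculated field, the following sketch assumes a hypothetical [street] field containing values such as "10 Main St":

```
REGEX([street], "^(\d+)\b")
```

The ^ anchor ties the match to the start of the value, and \b requires that the captured digits form a complete word, so the expression extracts the leading house number.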
Quantifier Constructs
Quantifiers specify how often the preceding regular expression construct should match. The classes of quantifiers are:
Greedy
Reluctant
Possessive
The difference between greedy, reluctant, and possessive quantifiers involves what part of the string to try for the initial match, and
how to retry if the initial attempt doesn't produce a match.
By default, quantifiers are greedy. A greedy quantifier first tries for a match with the entire input string. If that produces a match,
then it considers the match a success and the engine can move on to the next construct in the regular expression. If the first try
doesn't produce a match, the engine backs-off 1 character at a time until it finds a match. So a greedy quantifier checks for
possible matches in order from the longest possible input string to the shortest possible input string, recursively trying from right
to left.
Adding a ? (question mark) to a greedy quantifier makes it reluctant. A reluctant quantifier first tries for a match from the beginning
of the input string, starting with the shortest possible piece of the string that matches the regex construct. If that produces a
match, then it considers the match a success and the engine can move on to the next construct in the regular expression. If the
first try doesn't produce a match, the engine adds 1 character at a time until it finds a match. So a reluctant quantifier checks for
possible matches in order from the shortest possible input string to the longest possible input string, recursively trying from left to
right.
Adding a + (plus sign) to a greedy quantifier makes it possessive. A possessive quantifier is like a greedy quantifier on the first
attempt (it tries for a match with the entire input string). The difference is that unlike a greedy quantifier, a possessive quantifier
doesn't retry a shorter string if it doesn't find a match. If the initial match fails, the possessive quantifier reports a failed match. It
doesn't make any more attempts.
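The greedy and reluctant behaviors can be contrasted with a sketch. Assuming a hypothetical [markup] field containing the value <b>bold</b>:

```
REGEX([markup], "<(.+)>")
REGEX([markup], "<(.+?)>")
```

The greedy pattern captures b>bold</b because .+ extends to the last > in the string, while the reluctant pattern captures only b because .+? stops at the first > that completes a match.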
Each row lists the greedy, reluctant, and possessive forms of a quantifier:
+ +? ++ Matches the previous character or construct 1 or more times. Example: if+ matches "if", "iff" in "diff", but nothing in "print".
{n} {n}? {n}+ Matches the previous character or construct exactly n times. Example: o{2} matches "oo" in "lookup" and the first 2 o's in "fooooo" but nothing in "mount".
{n,} {n,}? {n,}+ Matches the previous character or construct at least n times. Example: o{2,} matches "oo" in "lookup" and all 5 o's in "fooooo" but nothing in "mount".
{n,m} {n,m}? {n,m}+ Matches the previous character or construct at least n times, but no more than m times. Example: F{2,4} matches "FF" in "#FF0000" and the last 4 F's in "#FFFFFF".
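As an illustration of a bounded quantifier in a Prism calculated field, the following sketch assumes a hypothetical [filename] field containing values such as report_2023.csv:

```
REGEX([filename], "_(\d{4})\.")
```

The {4} quantifier requires exactly 4 digits between the underscore and the dot, so the expression extracts the 4-digit year.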
You can use a pair of parentheses around a subpattern in a regular expression to define a group. You can use regex groups to capture parts of a match and refer to them later with backreferences.
Group Numbering
A regular expression can have more than 1 group, and the groups can be nested. The groups are numbered 1-n from left to right,
starting with the first opening parenthesis. There is always an implicit group zero (0), which contains the entire match. Example:
(a(b*))+(c)
group 1: (a(b*))
group 2: (b*)
group 3: (c)
Suppose you have a field name called email that contains email addresses with this pattern: username@provider.domain. You can
use the REGEX function to return just the provider portion of the email address from the email field:
REGEX([email],"^[a-zA-Z0-9._%+-]+@([a-zA-Z0-9._-]+)\.[a-zA-Z]{2,4}$")
You can use capturing groups to capture backreferences, but the entire match is always returned.
Suppose you want to match a pair of HTML tags and their enclosed text. You could capture the opening tag into a capturing group,
and then use a backreference to match the corresponding closing tag:
(<([A-Z][A-Z0-9]*)\b[^>]*>.*?</\2>)
Group 1 contains the outermost parentheses and captures the entire string.
Group 2 captures the string matched by [A-Z][A-Z0-9]*.
You can then refer to group 2 using the \2 backreference to match the corresponding closing HTML tag.
When you use the REGEX_REPLACE function, you can use a backreference to refer to a capturing group in the previous regular
expression. The syntax is slightly different when you use a backreference to refer to a group in the previous regex. Use a dollar sign
($) before the group number, such as $1 to specify a backreference to group 1 of the previous expression.
Suppose you have a phone_number field where the values are formatted as xxx.xxx.xxxx. The following example matches the
values in phone_number and replaces them with values formatted as (xxx) xxx-xxxx:
REGEX_REPLACE([phone_number],"([0-9]{3})\.([0-9]{3})\.([0-9]{4})","\($1\) $2-$3")
Notice the backreferences in the replacement expression. They refer to the capturing groups of the previous matching expression.
Non-Capturing Groups
In some cases, you might want to use parentheses to group subpatterns, but not capture text. A non-capturing group starts with (?:
(a question mark and colon following the opening parenthesis). For example, h(?:a|i|o)t matches hat or hit or hot, but does
not capture the a, i, or o from the subexpression.
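A non-capturing group can be combined with a capturing group in the same pattern. The following sketch assumes a hypothetical [note] field containing values such as "hot dog":

```
REGEX([note], "h(?:a|i|o)t\s(\w+)")
```

The (?:a|i|o) group constrains the alternation without capturing it, so group 1 is the word that follows hat, hit, or hot.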